var-202206-1428
Vulnerability from variot
In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed may be passed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0, 3.0.1, 3.0.2, 3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068).
Bugs fixed (https://bugzilla.redhat.com/):
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
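The injection class described above can be illustrated with a small shell sketch (hypothetical file name and commands, not the actual c_rehash code): a file name containing a `;` metacharacter is harmless when passed as a single quoted argument, but runs a second command if the name is re-parsed by a shell.

```shell
# Attacker-controlled certificate file name containing a shell metacharacter.
fname='cert.pem; echo INJECTED'

# UNSAFE: interpolating the name into a command string that a shell
# re-parses makes everything after ";" execute as a second command.
unsafe=$(sh -c "echo processing $fname")

# SAFE: keep the name as one quoted argument; no re-parsing occurs.
# (The "openssl rehash" tool replaces c_rehash and avoids the shell.)
safe="processing $fname"

echo "unsafe: $unsafe"   # contains the injected command's output
echo "safe: $safe"       # the metacharacters stay inert data
```

This is why the advisory recommends the built-in `openssl rehash` command, which processes directory entries without invoking a shell, over the obsolete c_rehash script.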
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update
Advisory ID: RHSA-2022:6156-01
Product: RHODF
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156
Issue date: 2022-08-24
CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235 CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586 CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675 CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526 CVE-2022-29810 CVE-2022-29824 CVE-2022-31129
====================================================================
1. Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- eventsource: Exposure of Sensitive Information (CVE-2022-1650)
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
- node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
- node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security and networks,etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims .
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happening on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing .
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on its own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools (no use case) and Object (redundant with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashboards not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshot don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- References:
https://access.redhat.com/security/cve/CVE-2021-23440
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0536
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1650
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24771
https://access.redhat.com/security/cve/CVE-2022-24772
https://access.redhat.com/security/cve/CVE-2022-24773
https://access.redhat.com/security/cve/CVE-2022-24785
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
- Description:
Release osp-director-operator images
Security Fix(es):
- CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read [important]
- CVE-2021-41103 golang: containerd: insufficiently restricted permissions on container root and plugin directories [medium]
- Solution:
OSP 16.2.z Release - OSP Director Operator Containers
- Summary:
This is an updated release of the Self Node Remediation Operator. The Self Node Remediation Operator replaces the Poison Pill Operator, and is delivered by Red Hat Workload Availability.
- Description:
The Self Node Remediation Operator works in conjunction with the Machine Health Check or the Node Health Check Operators to provide automatic remediation of unhealthy nodes by rebooting them. This minimizes downtime for stateful applications and RWO volumes, as well as restoring compute capacity in the event of transient failures.
Security Fix(es):
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, see the CVE page(s) listed in the References section.
- Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- Description:
Multicluster engine for Kubernetes 2.1 images
Multicluster engine for Kubernetes provides the foundational components that are necessary for the centralized management of multiple Kubernetes-based clusters across data centers, public clouds, and private clouds.
You can use the engine to create new Red Hat OpenShift Container Platform clusters or to bring existing Kubernetes-based clusters under management by importing them. After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.
Security fixes:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
Bug fixes:
- MCE 2.1.0 Images (BZ# 2090907)
- cluster-proxy-agent not able to startup (BZ# 2109394)
- Create cluster button skips Infrastructure page, shows blank page (BZ# 2110713)
- AWS Icon sometimes doesn't show up in create cluster wizard (BZ# 2110734)
- Infrastructure descriptions in create cluster catalog should be consistent and clear (BZ# 2110811)
- The user with clusterset view permission should not be able to update the namespace binding with the pencil icon on clusterset details page (BZ# 2111483)
- hypershift cluster creation -> not all agent labels are shown in the node pools screen (BZ# 2112326)
- CIM - SNO expansion, worker node status incorrect (BZ# 2114735)
- Wizard fields are not pre-filled after picking credentials (BZ# 2117163)
- ManagedClusterImageRegistry CR is wrong in pure MCE env
- Solution:
For multicluster engine for Kubernetes, see the following documentation for details on how to install the images:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce
- Bugs fixed (https://bugzilla.redhat.com/):
2090907 - MCE 2.1.0 Images
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2109394 - cluster-proxy-agent not able to startup
2111483 - The user with clusterset view permission should not able to update the namespace binding with the pencil icon on clusterset details page
2112326 - [UI] hypershift cluster creation -> not all agent labels are shown in the node pools screen
2114735 - [UI] CIM - SNO expansion, worker node status incorrect
2117163 - [UI] Wizard fields are not pre-filled after picking credentials
2117447 - [ACM 2.6] ManagedClusterImageRegistry CR is wrong in pure MCE env
- This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
OpenSSL 1.0.2 users should upgrade to 1.0.2zf (premium support customers only) OpenSSL 1.1.1 users should upgrade to 1.1.1p OpenSSL 3.0 users should upgrade to 3.0.4
This issue was reported to OpenSSL on the 20th May 2022. It was found by Chancen of Qingteng 73lab. A further instance of the issue was found by Daniel Fiala of OpenSSL during a code review of the script. The fix for these issues was developed by Daniel Fiala and Tomas Mraz from OpenSSL.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html
OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.
Users of these versions should upgrade to OpenSSL 3.0 or 1.1.1.
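To decide which of the upgrade paths above applies to a given host, the installed OpenSSL branch can be checked from the shell. A minimal sketch (the exact version string format varies by build):

```shell
# Print the installed OpenSSL branch to decide which fixed release applies:
#   3.0.x  -> upgrade to 3.0.4 or later
#   1.1.1x -> upgrade to 1.1.1p or later
#   1.0.2x -> 1.0.2zf (premium support customers only)
if command -v openssl >/dev/null 2>&1; then
    # "openssl version" prints e.g. "OpenSSL 1.1.1o  3 May 2022";
    # the second field is the version number.
    branch=$(openssl version | awk '{print $2}')
    echo "installed OpenSSL version: $branch"
else
    echo "openssl not found on PATH"
fi
```

Note that distribution packages often backport fixes without bumping the upstream version, so the vendor advisory (not only the version string) is authoritative for patched status.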
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20220621.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
- Summary:
The Migration Toolkit for Containers (MTC) 1.7.4 is now available.
- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
Bugs fixed (https://bugzilla.redhat.com/):
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key
2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key
2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key
2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly, for detailed release notes:
https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html
For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node
- Bugs fixed (https://bugzilla.redhat.com/):
1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implementation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk cannot be edited
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - [CNV-4.11] HCO Being Unable to Reconcile State
2083093 - VM overview tab is crashed
2083097 - "Mount Windows drivers disk" should not show when the template is not "windows"
2083100 - Something keeps loading in the "node selector" modal
2083101 - "Restore default settings" never become available while editing CPU/Memory
2083135 - VM fails to schedule with vTPM in spec
2083256 - SSP Reconcile logging improvement when CR resources are changed
2083595 - [RFE] Disable VM descheduler if the VM is not live migratable
2084102 - [e2e] Many elements are lacking proper selector like 'data-test-id' or 'data-test'
2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails
2084418 - "Invalid SSH public key format" appears when dragging ssh key file to "Authorized SSH Key" field
2084431 - User credentials for ssh are not in correct format
2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab.
2084532 - Console is crashed while detaching disk
2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)
2085320 - Tolerations rules is not adding correctly
2085322 - Not able to stop/restart VM if the VM is staying in "Starting"
2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode
2086278 - Cloud init script edit add " hostname='' " when is should not be added
2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode
2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode
2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode
2086294 - [dark mode] Can't see the number inside the donut chart in VMs per template card
2086303 - non-priv user can't create VM when namespace is not selected
2086479 - some modals use "Save" and some modals use "Submit"
2086486 - cluster overview getting started card include old information
2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend
2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard
2086803 - When cloning a template we need to update vm labels and annotations to match the new template
2086825 - VM restore PVC uses exact source PVC request size
2086849 - Create from YAML example is not runnable
2087188 - When VM is stopped - adding disk failed to show
2087189 - When VM is stopped - adding disk failed to show
2087232 - When choosing a vm or template while in all-namespace, and returning to list, namespace is changed
2087546 - "Quick Starts" is missing in Getting started card
2087547 - Activity and Status card are missing in Virtualization Overview
2087559 - template in "VMs per template" should take user to vm list page
2087566 - Remove the "auto upload" label from template in the catalog if the auto-upload boot source does not exist
2087570 - Page title should be "VirtualMachines" and not "Virtual Machines"
2087577 - "VMs per template" load time is a bit long
2087578 - Terminology "VM" should be "Virtual Machine" in all places
2087582 - Remove VMI and MTV from the navigation
2087583 - [RFE] Show more info about boot source in template list
2087584 - Template provider should not be mandatory
2087587 - Improve the descriptive text in the kebab menu of template
2087589 - Red icons shows in storage disk source selection without a good reason
2087590 - [REF] "Upload a new file to a PVC" should not open the form in a new tab
2087593 - "Boot method" is not a good name in overview tab
2087603 - Align details card for single VM overview with the design doc
2087616 - align the utilization card of single VM overview with the design
2087701 - [RFE] Missing a link to VMI from running VM details page
2087717 - Message when editing template boot source is wrong
2088034 - Virtualization Overview crashes when a VirtualMachine has no labels
2088355 - disk modal shows all storage classes as default
2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2088379 - Create VM from catalog does not respect the storageclass of the template's boot source
2088407 - Missing create button in the template list
2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context
2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11
2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error
2088849 - "dataimportcrontemplate.kubevirt.io/enable" field does not do any validation
2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco
2089271 - Virtualization appears twice in sidebar
2089327 - add network modal crash when no networks available
2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page
2089477 - [RFE] Allow upload source when adding VM disk
2089700 - Drive column in Disks card of Overview page has duplicated values
2089745 - When removing all disks from customize wizard app crashes
2089789 - Add windows drivers disk is missing when template is not windows
2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user
2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages
2089840 - Can't create snapshot if VM is without disks
2089877 - Utilization card on single VM overview - timespan menu lacks 5min option
2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update
2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics
2089954 - Details card on single VM overview - VNC console has grey padding
2089963 - Details card on single VM overview - Operating system info is not available
2089967 - Network Interfaces card on single VM overview - name tooltip lacks info
2089970 - Network Interfaces card on single VM overview - IP tooltip
2089972 - Disks card on single VM overview -typo
2089979 - Single VM Details - CPU|Memory edit icon misplaced
2089982 - Single VM Details - SSH modal has redundant VM name
2090035 - Alert card is missing in single VM overview
2090036 - OS should be "Operating system" and host should be "hostname" in single vm overview
2090037 - Add template link in single vm overview details card
2090038 - The update field under the version in overview should be consistent with the operator page
2090042 - Move the edit button close to the text for "boot order" and "ssh access"
2090043 - "No resource selected" in vm boot order
2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page
2090048 - "Boot mode" should be editable while VM is running
2090054 - Services "kubernetes" and "openshift" should not be listed in vm details
2090055 - Add link to vm template in vm details page
2090056 - "Something went wrong" shows on VM "Environment" tab
2090057 - "?" icon is too big in environment and disk tab
2090059 - Failed to add configmap in environment tab due to validate error
2090064 - Miss "remote desktop" in console dropdown list for windows VM
2090066 - [RFE] Improve guest login credentials
2090068 - Make the "name" and "Source" column wider in vm disk tab
2090131 - Key's value in "add affinity rule" modal is too small
2090350 - memory leak in virt-launcher process
2091003 - SSH service is not deleted along the VM
2091058 - After VM gets deleted, the user is redirected to a page with a different namespace
2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec.
2091406 - wrong template namespace label when creating a vm with wizard
2091754 - Scheduling and scripts tab should be editable while the VM is running
2091755 - Change bottom "Save" to "Apply" on cloud-init script form
2091756 - The root disk of cloned template should be editable
2091758 - "OS" should be "Operating system" in template filter
2091760 - The provider should be empty if it's not set during cloning
2091761 - Miss "Edit labels" and "Edit annotations" in template kebab button
2091762 - Move notification above the tabs in template details page
2091764 - Clone a template should lead to the template details
2091765 - "Edit bootsource" keeps loading in template actions dropdown
2091766 - "Are you sure you want to leave this page?" pops up when clicking the "Templates" link
2091853 - On Snapshot tab of single VM "Restore" button should move to the kebab actions together with the Delete
2091863 - BootSource edit modal should list affected templates
2091868 - Catalog list view has two columns named "BootSource"
2091889 - Devices should be editable for customize template
2091897 - username is missing in the generated ssh command
2091904 - VM is not started if adding "Authorized SSH Key" during vm creation
2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root
2091940 - SSH is not enabled in vm details after restart the VM
2091945 - delete a template should lead to templates list
2091946 - Add disk modal shows wrong units
2091982 - Got a lot of "Reconciler error" in cdi-deployment log after adding custom DataImportCron to hco
2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank
2092052 - Virtualization should be omitted in Catalog breadcrumbs
2092071 - Getting started card in Virtualization overview can not be hidden.
2092079 - Error message stays even when problematic field is dismissed
2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO
2092228 - Ensure Machine Type for new VMs is 8.6
2092230 - [RFE] Add indication/mark to deprecated template
2092306 - VM is stuck in WaitingForVolumeBinding if creating via "Boot from CD"
2092337 - os is empty in VM details page
2092359 - [e2e] data-test-id includes all pvc name
2092654 - [RFE] No obvious way to delete the ssh key from the VM
2092662 - No url example for rhel and windows template
2092663 - no hyperlink for URL example in disk source "url"
2092664 - no hyperlink to the cdi uploadproxy URL
2092781 - Details card should be removed for non admins.
2092783 - Top consumers' card should be removed for non admins.
2092787 - Operators links should be removed from Getting started card
2092789 - "Learn more about Operators" link should lead to the Red Hat documentation
2092951 - "Edit BootSource" action should have more explicit information when disabled
2093282 - Remove links to 'all-namespaces/' for non-privileged user
2093691 - Creation flow drawer left padding is broken
2093713 - Required fields in creation flow should be highlighted if empty
2093715 - Optional parameters section in creation flow is missing bottom padding
2093716 - CPU|Memory modal button should say "Restore template settings"
2093772 - Add a service in environment it reminds a pending change in boot order
2093773 - Console crashed if adding a service without serial number
2093866 - Cannot create vm from the template vm-template-example
2093867 - OS for template 'vm-template-example' should matching the version of the image
2094202 - Cloud-init username field should have hint
2094207 - Cloud-init password field should have auto-generate option
2094208 - SSH key input is missing validation
2094217 - YAML view should reflect changes in SSH form
2094222 - "?" icon should be placed after red asterisk in required fields
2094323 - Workload profile should be editable in template details page
2094405 - adding resource on environment isn't showing on disks list when vm is running
2094440 - Utilization pie charts figures are not based on current data
2094451 - PVC selection in VM creation flow does not work for non-priv user
2094453 - CD Source selection in VM creation flow is missing Upload option
2094465 - Typo in Source tooltip
2094471 - Node selector modal for non-privileged user
2094481 - Tolerations modal for non-privileged user
2094486 - Add affinity rule modal
2094491 - Affinity rules modal button
2094495 - Descheduler modal has same text in two lines
2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id
2094665 - Dedicated Resources modal for non-privileged user
2094678 - Secrets and ConfigMaps can't be added to Windows VM
2094727 - Creation flow should have VM info in header row
2094807 - hardware devices dropdown has group title even with no devices in cluster
2094813 - Cloudinit password is seen in wizard
2094848 - Details card on Overview page - 'View details' link is missing
2095125 - OS is empty in the clone modal
2095129 - "undefined" appears in rootdisk line in clone modal
2095224 - affinity modal for non-privileged users
2095529 - VM migration cancelation in kebab action should have shorter name
2095530 - Column sizes in VM list view
2095532 - Node column in VM list view is visible to non-privileged user
2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime
2095570 - Details tab of VM should not have Node info for non-privileged user
2095573 - Disks created as environment or scripts should have proper label
2095953 - VNC console controls layout
2095955 - VNC console tabs
2096166 - Template "vm-template-example" is binding with namespace "default"
2096206 - Inconsistent capitalization in Template Actions
2096208 - Templates in the catalog list is not sorted
2096263 - Incorrectly displaying units for Disks size or Memory field in various places
2096333 - virtualization overview, related operators title is not aligned
2096492 - Cannot create vm from a cloned template if its boot source is edited
2096502 - "Restore template settings" should be removed from template CPU editor
2096510 - VM can be created without any disk
2096511 - Template shows "no Boot Source" and label "Source available" at the same time
2096620 - in templates list, edit boot reference kebab action opens a modal with different title
2096781 - Remove boot source provider while edit boot source reference
2096801 - vnc thumbnail in virtual machine overview should be active on page load
2096845 - Windows template's scripts tab is crashed
2097328 - virtctl guestfs shouldn't require uid = 0
2097370 - missing titles for optional parameters in wizard customization page
2097465 - Count is not updating for 'prometheusrule' component when metrics kubevirt_hco_out_of_band_modifications_count executed
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2098134 - "Workload profile" column is not showing completely in template list
2098135 - Workload is not showing correct in catalog after change the template's workload
2098282 - Javascript error when changing boot source of custom template to be an uploaded file
2099443 - No "Quick create virtualmachine" button for template 'vm-template-example'
2099533 - ConsoleQuickStart for HCO CR's VM is missing
2099535 - The cdi-uploadproxy certificate url should be opened in a new tab
2099539 - No storage option for upload while editing a disk
2099566 - Cloudinit should be replaced by cloud-init in all places
2099608 - "DynamicB" shows in vm-example disk size
2099633 - Doc links needs to be updated
2099639 - Remove user line from the ssh command section
2099802 - Details card link shouldn't be hard-coded
2100054 - Windows VM with WSL2 guest fails to migrate
2100284 - Virtualization overview is crashed
2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101485 - Cloudinit should be replaced by cloud-init in all places
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer
2102076 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2102122 - non-priv user cannot load dataSource while edit template's rootdisk
2102124 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2102125 - vm clone modal is displaying DV size instead of PVC size
2102127 - Cannot add NIC to VM template as non-priv user
2102129 - All templates are labeling "source available" in template list page
2102131 - The number of hardware devices is not correct in vm overview tab
2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2102143 - vm clone modal is displaying DV size instead of PVC size
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102543 - Add button moved to right
2102544 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102545 - VM filter has two "Other" checkboxes which are triggered together
2104617 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2106175 - All pages are crashed after visit Virtualization -> Overview
2106258 - All pages are crashed after visit Virtualization -> Overview
2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions
2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111562 - kubevirt plugin console crashed after visit vmi page
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202206-1428", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap antivirus connector", "scope": 
"eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fas a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2zf" }, { "model": "bootstrap os", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "h610c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fas 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "aff a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sannav", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "aff 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": 
null }, { "model": "smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h610s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fas 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "snapmanager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "h615c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1p" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "3.0.4" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "aff 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "169435" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168387" }, { "db": "PACKETSTORM", "id": "168182" }, { "db": "PACKETSTORM", "id": "168282" }, { "db": "PACKETSTORM", "id": "170165" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "168392" } ], "trust": 0.9 }, "cve": "CVE-2022-2068", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, 
"cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "nvd@nist.gov", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "id": "CVE-2022-2068", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 1.1, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "nvd@nist.gov", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "id": "CVE-2022-2068", "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "nvd@nist.gov", "id": "CVE-2022-2068", "trust": 1.0, "value": "CRITICAL" }, { "author": "VULMON", "id": "CVE-2022-2068", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script 
does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Bugs fixed (https://bugzilla.redhat.com/):\n\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID: RHSA-2022:6156-01\nProduct: RHODF\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6156\nIssue date: 2022-08-24\nCVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. 
\n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local users (CVE-2022-29810)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References 
section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 
follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\": \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks, etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happening on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploading JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on its own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundant with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashboards not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy\nXo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N\nb3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY\nsJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj\ngz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am\n2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf\nmX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl\nCSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE\nhTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY\n3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E\n2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V\nERCapzwcRuQ=gV+z\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRelease osp-director-operator images\n\nSecurity Fix(es):\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n[important]\n* CVE-2021-41103 golang: containerd: insufficiently restricted permissions\non container root and plugin directories [medium]\n\n3. Solution:\n\nOSP 16.2.z Release - OSP Director Operator Containers\n\n4. Summary:\n\nThis is an updated release of the Self Node Remediation Operator. The Self\nNode Remediation Operator replaces the Poison Pill Operator, and is\ndelivered by Red Hat Workload Availability. Description:\n\nThe Self Node Remediation Operator works in conjunction with the Machine\nHealth Check or the Node Health Check Operators to provide automatic\nremediation of unhealthy nodes by rebooting them. This minimizes downtime\nfor stateful applications and RWO volumes, as well as restoring compute\ncapacity in the event of transient failures. \n\nSecurity Fix(es):\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, see the CVE page(s)\nlisted in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Description:\n\nMulticluster engine for Kubernetes 2.1 images\n\nMulticluster engine for Kubernetes provides the foundational components\nthat are necessary for the centralized management of multiple\nKubernetes-based clusters across data centers, public clouds, and private\nclouds. \n\nYou can use the engine to create new Red Hat OpenShift Container Platform\nclusters or to bring existing Kubernetes-based clusters under management by\nimporting them. After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. 
\n\nSecurity fixes:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n* CVE-2022-1705 golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n\n* CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\nBug fixes:\n\n* MCE 2.1.0 Images (BZ# 2090907)\n\n* cluster-proxy-agent not able to startup (BZ# 2109394)\n\n* Create cluster button skips Infrastructure page, shows blank page (BZ#\n2110713)\n\n* AWS Icon sometimes doesn\u0027t show up in create cluster wizard (BZ# 2110734)\n\n* Infrastructure descriptions in create cluster catalog should be\nconsistent and clear (BZ# 2110811)\n\n* The user with clusterset view permission should not able to update the\nnamespace binding with the pencil icon on clusterset details page (BZ#\n2111483)\n\n* hypershift cluster creation -\u003e not all agent labels are shown in the node\npools screen (BZ# 2112326)\n\n* CIM - SNO expansion, worker node status incorrect (BZ# 2114735)\n\n* Wizard fields are not pre-filled after picking credentials (BZ# 2117163)\n\n* ManagedClusterImageRegistry CR is wrong in pure MCE env\n\n3. 
Solution:\n\nFor multicluster engine for Kubernetes, see the following documentation for\ndetails on how to install the images:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2090907 - MCE 2.1.0 Images\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2109394 - cluster-proxy-agent not able to startup\n2111483 - The user with clusterset view permission should not able to update the namespace binding with the pencil icon on clusterset details page\n2112326 - [UI] hypershift cluster creation -\u003e not all agent labels are shown in the node pools screen\n2114735 - [UI] CIM - SNO expansion, worker node status incorrect\n2117163 - [UI] Wizard fields are not pre-filled after picking credentials\n2117447 - [ACM 2.6] ManagedClusterImageRegistry CR is wrong in pure MCE env\n\n5. 
This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n5. 
\n\nOpenSSL 1.0.2 users should upgrade to 1.0.2zf (premium support customers only)\nOpenSSL 1.1.1 users should upgrade to 1.1.1p\nOpenSSL 3.0 users should upgrade to 3.0.4\n\nThis issue was reported to OpenSSL on the 20th May 2022. It was found by\nChancen of Qingteng 73lab. A further instance of the issue was found by\nDaniel Fiala of OpenSSL during a code review of the script. The fix for\nthese issues was developed by Daniel Fiala and Tomas Mraz from OpenSSL. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended\nsupport is available for premium support customers:\nhttps://www.openssl.org/support/contracts.html\n\nOpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. \n\nUsers of these versions should upgrade to OpenSSL 3.0 or 1.1.1. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20220621.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key\n2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key\n2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key\n2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly, for detailed release notes:\n\nhttps://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html\n\nFor Red Hat OpenShift Logging 5.3, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node\n\n6. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937609 - VM cannot be restarted\n1945593 - Live migration should be blocked for VMs with host devices\n1968514 - [RFE] Add cancel migration action to virtctl\n1993109 - CNV MacOS Client not signed\n1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side\n2001385 - no \"name\" label in virt-operator pod\n2009793 - KBase to clarify nested support status is missing\n2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate\n2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)\n2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implemenation\n2026357 - Migration in sequence can be reported as failed even when it succeeded\n2029349 - cluster-network-addons-operator does not serve metrics through HTTPS\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2031857 - Add annotation for URL to download the image\n2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2035344 - kubemacpool-mac-controller-manager not ready\n2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered\n2039976 - Pod stuck in \"Terminating\" state when removing VM with kernel boot and container disks\n2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI\n2041467 - [SSP] Support custom DataImportCron creating in custom namespaces\n2042402 - LiveMigration with postcopy misbehave when failure occurs\n2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists\n2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047186 - When entering to a RH supported 
template, it changes the project (namespace) to \"OpenShift\"\n2051899 - 4.11.0 containers\n2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn\u0027t configure ip nat rules\n2052466 - Event does not include reason for inability to live migrate\n2052689 - Overhead Memory consumption calculations are incorrect\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056467 - virt-template-validator pods getting scheduled on the same node\n2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long\n2057310 - qemu-guest-agent does not report information due to selinux denials\n2058149 - cluster-network-addons-operator deployment\u0027s MULTUS_IMAGE is pointing to brew image\n2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs\n2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state\n2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool\n2060585 - [SNO] Failed to find the virt-controller leader pod\n2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled. 
\n2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource\n2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace\n2063792 - No DataImportCron for CentOS 7\n2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2064936 - Migration of vm from VMware reports pvc not large enough\n2065014 - Feature Highlights in CNV 4.10 contains links to 4.7\n2065019 - \"Running VMs per template\" in the new overview tab counts VMs that are not running\n2066768 - [CNV-4.11-HCO] User Cannot List Resource \"namespaces\" in API group\n2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom\n2069287 - Two annotations for VM Template provider name\n2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2070864 - non-privileged user cannot see catalog tiles\n2071488 - \"Migrate Node to Node\" is confusing. \n2071549 - [rhel-9] unable to create a non-root virt-launcher based VM\n2071611 - Metrics documentation generators are missing metrics/recording rules\n2071921 - Kubevirt RPM is not being built\n2073669 - [rhel-9] VM fails to start\n2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream\n2073982 - [CNV-4.11-RHEL9] \u0027virtctl\u0027 binary fails with \u0027rc1\u0027 with \u0027virtctl version\u0027 command\n2074337 - VM created from registry cannot be started\n2075200 - VLAN filtering cannot be configured with Intel X710\n2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff\n2076292 - Upgrade from 4.10.1-\u003e4.11 using nightly channel, is not completing with error \"could not complete the upgrade process. 
KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR\"\n2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file\n2076790 - Alert SSPDown is constantly in Firing state\n2076908 - clicking on a template in the Running VMs per Template card leads to 404\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2078700 - Windows template boot source should be blank\n2078703 - [RFE] Please hide the user defined password when customizing cloud-init\n2078709 - VM conditions column have wrong key/values\n2078728 - Common template rootDisk is not named correctly\n2079366 - rootdisk is not able to edit\n2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM\n2079783 - Actions are broken in topology view\n2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck\n2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod\n2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop\n2080833 - Missing cloud init script editor in the scripts tab\n2080835 - SSH key is set using cloud init script instead of new api\n2081182 - VM SSH command generated by UI points at api VIP\n2081202 - cloud-init for Windows VM generated with corrupted \"undefined\" section\n2081409 - when viewing a common template details page, user need to see the message \"can\u0027t edit common template\" on all tabs\n2081671 - SSH service created outside the UI is not discoverable\n2081831 - [RFE] Improve disk hotplug UX\n2082008 - LiveMigration fails due to loss of connection to destination host\n2082164 - Migration progress timeout expects absolute progress\n2082912 - 
[CNV-4.11] HCO Being Unable to Reconcile State\n2083093 - VM overview tab is crashed\n2083097 - \"Mount Windows drivers disk\" should not show when the template is not \"windows\"\n2083100 - Something keeps loading in the \"node selector\" modal\n2083101 - \"Restore default settings\" never become available while editing CPU/Memory\n2083135 - VM fails to schedule with vTPM in spec\n2083256 - SSP Reconcile logging improvement when CR resources are changed\n2083595 - [RFE] Disable VM descheduler if the VM is not live migratable\n2084102 - [e2e] Many elements are lacking proper selector like \u0027data-test-id\u0027 or \u0027data-test\u0027\n2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails\n2084418 - \"Invalid SSH public key format\" appears when drag ssh key file to \"Authorized SSH Key\" field\n2084431 - User credentials for ssh is not in correct format\n2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab. \n2084532 - Console is crashed while detaching disk\n2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)\n2085320 - Tolerations rules is not adding correctly\n2085322 - Not able to stop/restart VM if the VM is staying in \"Starting\"\n2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode\n2086278 - Cloud init script edit add \" hostname=\u0027\u0027 \" when is should not be added\n2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode\n2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode\n2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode\n2086294 - [dark mode] Can\u0027t see the number inside the donut chart in VMs per template card\n2086303 - non-priv user can\u0027t create VM when namespace is not selected\n2086479 - some modals use \"Save\" 
and some modals use \"Submit\"\n2086486 - cluster overview getting started card include old information\n2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend\n2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard\n2086803 - When clonnig a template we need to update vm labels and annotaions to match new template\n2086825 - VM restore PVC uses exact source PVC request size\n2086849 - Create from YAML example is not runnable\n2087188 - When VM is stopped - adding disk failed to show\n2087189 - When VM is stopped - adding disk failed to show\n2087232 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2087546 - \"Quick Starts\" is missing in Getting started card\n2087547 - Activity and Status card are missing in Virtualization Overview\n2087559 - template in \"VMs per template\" should take user to vm list page\n2087566 - Remove the \"auto upload\" label from template in the catalog if the auto-upload boot source not exists\n2087570 - Page title should be \"VirtualMachines\" 
and not \"Virtual Machines\"\n2087577 - \"VMs per template\" load time is a bit long\n2087578 - Terminology \"VM\" should be \"Virtual Machine\" in all places\n2087582 - Remove VMI and MTV from the navigation\n2087583 - [RFE] Show more info about boot source in template list\n2087584 - Template provider should not be mandatory\n2087587 - Improve the descriptive text in the kebab menu of template\n2087589 - Red icons shows in storage disk source selection without a good reason\n2087590 - [REF] \"Upload a new file to a PVC\" should not open the form in a new tab\n2087593 - \"Boot method\" is not a good name in overview tab\n2087603 - Align details card for single VM overview with the design doc\n2087616 - align the utilization card of single VM overview with the design\n2087701 - [RFE] Missing a link to VMI from running VM details page\n2087717 - Message when editing template boot source is wrong\n2088034 - Virtualization Overview crashes when a VirtualMachine has no labels\n2088355 - disk modal shows all storage classes as default\n2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2088379 - Create VM from catalog does not respect the storageclass of the template\u0027s boot source\n2088407 - Missing create button in the template list\n2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context\n2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11\n2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error\n2088849 - \"dataimportcrontemplate.kubevirt.io/enable\" field does not do any validation\n2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco\n2089271 - Virtualization appears twice in sidebar\n2089327 - add network modal crash when no networks available\n2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page\n2089477 - [RFE] Allow upload source when adding VM disk\n2089700 - 
Drive column in Disks card of Overview page has duplicated values\n2089745 - When removing all disks from customize wizard app crashes\n2089789 - Add windows drivers disk is missing when template is not windows\n2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user\n2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages\n2089840 - Cant create snapshot if VM is without disks\n2089877 - Utilization card on single VM overview - timespan menu lacks 5min option\n2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update\n2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics\n2089954 - Details card on single VM overview - VNC console has grey padding\n2089963 - Details card on single VM overview - Operating system info is not available\n2089967 - Network Interfaces card on single VM overview - name tooltip lacks info\n2089970 - Network Interfaces card on single VM overview - IP tooltip\n2089972 - Disks card on single VM overview -typo\n2089979 - Single VM Details - CPU|Memory edit icon misplaced\n2089982 - Single VM Details - SSH modal has redundant VM name\n2090035 - Alert card is missing in single VM overview\n2090036 - OS should be \"Operating system\" and host should be \"hostname\" in single vm overview\n2090037 - Add template link in single vm overview details card\n2090038 - The update field under the version in overview should be consistent with the operator page\n2090042 - Move the edit button close to the text for \"boot order\" and \"ssh access\"\n2090043 - \"No resource selected\" in vm boot order\n2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page\n2090048 - \"Boot mode\" should be editable while VM is running\n2090054 - Services ?kubernetes\" and \"openshift\" should not be listing in vm details\n2090055 - Add 
link to vm template in vm details page\n2090056 - \"Something went wrong\" shows on VM \"Environment\" tab\n2090057 - \"?\" icon is too big in environment and disk tab\n2090059 - Failed to add configmap in environment tab due to validate error\n2090064 - Miss \"remote desktop\" in console dropdown list for windows VM\n2090066 - [RFE] Improve guest login credentials\n2090068 - Make the \"name\" and \"Source\" column wider in vm disk tab\n2090131 - Key\u0027s value in \"add affinity rule\" modal is too small\n2090350 - memory leak in virt-launcher process\n2091003 - SSH service is not deleted along the VM\n2091058 - After VM gets deleted, the user is redirected to a page with a different namespace\n2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec. \n2091406 - wrong template namespace label when creating a vm with wizard\n2091754 - Scheduling and scripts tab should be editable while the VM is running\n2091755 - Change bottom \"Save\" to \"Apply\" on cloud-init script form\n2091756 - The root disk of cloned template should be editable\n2091758 - \"OS\" should be \"Operating system\" in template filter\n2091760 - The provider should be empty if it\u0027s not set during cloning\n2091761 - Miss \"Edit labels\" and \"Edit annotations\" in template kebab button\n2091762 - Move notification above the tabs in template details page\n2091764 - Clone a template should lead to the template details\n2091765 - \"Edit bootsource\" is keeping in load in template actions dropdown\n2091766 - \"Are you sure you want to leave this page?\" pops up when click the \"Templates\" link\n2091853 - On Snapshot tab of single VM \"Restore\" button should move to the kebab actions together with the Delete\n2091863 - BootSource edit modal should list affected templates\n2091868 - Catalog list view has two columns named \"BootSource\"\n2091889 - Devices should be editable for customize template\n2091897 - username is missing in the generated ssh 
command\n2091904 - VM is not started if adding \"Authorized SSH Key\" during vm creation\n2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root\n2091940 - SSH is not enabled in vm details after restart the VM\n2091945 - delete a template should lead to templates list\n2091946 - Add disk modal shows wrong units\n2091982 - Got a lot of \"Reconciler error\" in cdi-deployment log after adding custom DataImportCron to hco\n2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank\n2092052 - Virtualization should be omitted in Calatog breadcrumbs\n2092071 - Getting started card in Virtualization overview can not be hidden. \n2092079 - Error message stays even when problematic field is dismissed\n2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO\n2092228 - Ensure Machine Type for new VMs is 8.6\n2092230 - [RFE] Add indication/mark to deprecated template\n2092306 - VM is stucking with WaitingForVolumeBinding if creating via \"Boot from CD\"\n2092337 - os is empty in VM details page\n2092359 - [e2e] data-test-id includes all pvc name\n2092654 - [RFE] No obvious way to delete the ssh key from the VM\n2092662 - No url example for rhel and windows template\n2092663 - no hyperlink for URL example in disk source \"url\"\n2092664 - no hyperlink to the cdi uploadproxy URL\n2092781 - Details card should be removed for non admins. \n2092783 - Top consumers\u0027 card should be removed for non admins. \n2092787 - Operators links should be removed from Getting started card\n2092789 - \"Learn more about Operators\" link should lead to the Red Hat documentation\n2092951 - \"Edit BootSource\" 
action should have more explicit information when disabled\n2093282 - Remove links to \u0027all-namespaces/\u0027 for non-privileged user\n2093691 - Creation flow drawer left padding is broken\n2093713 - Required fields in creation flow should be highlighted if empty\n2093715 - Optional parameters section in creation flow is missing bottom padding\n2093716 - CPU|Memory modal button should say \"Restore template settings\"\n2093772 - Add a service in environment it reminds a pending change in boot order\n2093773 - Console crashed if adding a service without serial number\n2093866 - Cannot create vm from the template `vm-template-example`\n2093867 - OS for template \u0027vm-template-example\u0027 should matching the version of the image\n2094202 - Cloud-init username field should have hint\n2094207 - Cloud-init password field should have auto-generate option\n2094208 - SSH key input is missing validation\n2094217 - YAML view should reflect shanges in SSH form\n2094222 - \"?\" icon should be placed after red asterisk in required fields\n2094323 - Workload profile should be editable in template details page\n2094405 - adding resource on enviornment isnt showing on disks list when vm is running\n2094440 - Utilization pie charts figures are not based on current data\n2094451 - PVC selection in VM creation flow does not work for non-priv user\n2094453 - CD Source selection in VM creation flow is missing Upload option\n2094465 - Typo in Source tooltip\n2094471 - Node selector modal for non-privileged user\n2094481 - Tolerations modal for non-privileged user\n2094486 - Add affinity rule modal\n2094491 - Affinity rules modal button\n2094495 - Descheduler modal has same text in two lines\n2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id\n2094665 - Dedicated Resources modal for non-privileged user\n2094678 - Secrets and ConfigMaps can\u0027t be added to Windows VM\n2094727 - Creation flow should have VM info in header row\n2094807 - hardware devices 
dropdown has group title even with no devices in cluster\n2094813 - Cloudinit password is seen in wizard\n2094848 - Details card on Overview page - \u0027View details\u0027 link is missing\n2095125 - OS is empty in the clone modal\n2095129 - \"undefined\" appears in rootdisk line in clone modal\n2095224 - affinity modal for non-privileged users\n2095529 - VM migration cancelation in kebab action should have shorter name\n2095530 - Column sizes in VM list view\n2095532 - Node column in VM list view is visible to non-privileged user\n2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime\n2095570 - Details tab of VM should not have Node info for non-privileged user\n2095573 - Disks created as environment or scripts should have proper label\n2095953 - VNC console controls layout\n2095955 - VNC console tabs\n2096166 - Template \"vm-template-example\" is binding with namespace \"default\"\n2096206 - Inconsistent capitalization in Template Actions\n2096208 - Templates in the catalog list is not sorted\n2096263 - Incorrectly displaying units for Disks size or Memory field in various places\n2096333 - virtualization overview, related operators title is not aligned\n2096492 - Cannot create vm from a cloned template if its boot source is edited\n2096502 - \"Restore template settings\" should be removed from template CPU editor\n2096510 - VM can be created without any disk\n2096511 - Template shows \"no Boot Source\" and label \"Source available\" at the same time\n2096620 - in templates list, edit boot reference kebab action opens a modal with different title\n2096781 - Remove boot source provider while edit boot source reference\n2096801 - vnc thumbnail in virtual machine overview should be active on page load\n2096845 - Windows template\u0027s scripts tab is crashed\n2097328 - virtctl guestfs shouldn\u0027t required uid = 0\n2097370 - missing titles for optional parameters in wizard customization page\n2097465 - 
Count is not updating for \u0027prometheusrule\u0027 component when metrics kubevirt_hco_out_of_band_modifications_count executed\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2098134 - \"Workload profile\" column is not showing completely in template list\n2098135 - Workload is not showing correct in catalog after change the template\u0027s workload\n2098282 - Javascript error when changing boot source of custom template to be an uploaded file\n2099443 - No \"Quick create virtualmachine\" button for template \u0027vm-template-example\u0027\n2099533 - ConsoleQuickStart for HCO CR\u0027s VM is missing\n2099535 - The cdi-uploadproxy certificate url should be opened in a new tab\n2099539 - No storage option for upload while editing a disk\n2099566 - Cloudinit should be replaced by cloud-init in all places\n2099608 - \"DynamicB\" shows in vm-example disk size\n2099633 - Doc links needs to be updated\n2099639 - Remove user line from the ssh command section\n2099802 - Details card link shouldn\u0027t be hard-coded\n2100054 - Windows VM with WSL2 guest fails to migrate\n2100284 - Virtualization overview is crashed\n2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101485 - Cloudinit should be replaced by cloud-init in all places\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer\n2102076 - Using 
CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2102122 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2102124 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102127 - Cannot add NIC to VM template as non-priv user\n2102129 - All templates are labeling \"source available\" in template list page\n2102131 - The number of hardware devices is not correct in vm overview tab\n2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2102143 - vm clone modal is displaying DV size instead of PVC size\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102543 - Add button moved to right\n2102544 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102545 - VM filter has two \"Other\" checkboxes which are triggered together\n2104617 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106258 - All pages are crashed after visit Virtualization -\u003e Overview\n2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions\n2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111562 - kubevirt plugin console crashed after visit vmi page\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2022-2068" }, { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "169435" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168387" }, { "db": "PACKETSTORM", "id": 
"168182" }, { "db": "PACKETSTORM", "id": "168282" }, { "db": "PACKETSTORM", "id": "170165" }, { "db": "PACKETSTORM", "id": "169668" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "168392" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-2068", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-22-319-01", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2022-2068", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169435", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168150", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168387", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168182", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168282", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170165", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169668", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168352", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170179", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168392", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "169435" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168387" }, { "db": "PACKETSTORM", "id": "168182" }, { "db": "PACKETSTORM", "id": "168282" }, { "db": "PACKETSTORM", "id": "170165" }, { "db": "PACKETSTORM", "id": "169668" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "id": "VAR-202206-1428", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" 
} } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.416330645 }, "last_update_date": "2024-11-29T22:02:07.602000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Debian Security Advisories: DSA-5169-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350" }, { "title": "Amazon Linux AMI: ALAS-2022-1626", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1626" }, { "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-2068" }, { "title": "Amazon Linux 2: ALAS2-2022-1832", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1832" }, { "title": "Amazon Linux 2: ALAS2-2022-1831", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1831" }, { "title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALASOPENSSL-SNAPSAFE-2023-001" }, { "title": "Red Hat: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-2068" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228917 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web 
Server 5.7.1 release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228913 - Security Advisory" }, { "title": "Red Hat: Moderate: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225818 - Security Advisory" }, { "title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235982 - Security Advisory" }, { "title": "Red Hat: Moderate: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226224 - Security Advisory" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226517 - Security Advisory" }, { "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226184 - Security Advisory" }, { "title": "Red Hat: Important: Satellite 6.11.5.6 async security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235980 - Security Advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-123", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-123" }, { "title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235979 - Security Advisory" }, { "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226422 - Security Advisory" }, { "title": "Brocade Security Advisories: Access Denied", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226103 - Security Advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-195", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-195" }, { "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226188 - Security Advisory" }, { "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226182 - Security Advisory" }, { "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226051 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226283 - Security Advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226183 - Security Advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226507 - Security Advisory" }, { "title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227055 - Security Advisory" }, { "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227058 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228840 - Security Advisory" }, { "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226024 - Security Advisory" }, { "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226714 - Security Advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226290 - Security Advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226348 - Security Advisory" }, { "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226345 - Security Advisory" }, { "title": 
"Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228841 - Security Advisory" }, { "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226346 - Security Advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226430 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226370 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226271 - Security Advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226696 - Security Advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory" }, { "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228750 - Security Advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security 
and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230408 - Security Advisory" }, { "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228889 - Security Advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228781 - Security Advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory" }, { "title": "Smart Check Scan-Report", "trust": 0.1, "url": "https://github.com/mawinkler/c1-cs-scan-result " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/Vulnerability_checker " }, { "title": "https://github.com/jntass/TASSL-1.1.1", "trust": 0.1, "url": "https://github.com/jntass/TASSL-1.1.1 " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/CVE_checks " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories 
" }, { "title": "OpenSSL-CVE-lib", "trust": 0.1, "url": "https://github.com/chnzzh/OpenSSL-CVE-lib " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-78", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://www.openssl.org/news/secadv/20220621.txt" }, { "trust": 1.2, "url": "https://www.debian.org/security/2022/dsa-5169" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220707-0008/" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": 
"https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.6, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-34903" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-37434" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/78.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://github.com/backloop-biz/vulnerability_checker" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2022-1626.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7055" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0391" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2015-20107" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2015-20107" }, { "trust": 0.1, "url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29526" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6517" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41103" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41103" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6184" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-32148" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30630" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1705" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28131" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30633" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30632" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30633" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6345" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1962" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40674" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32207" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377" }, { "trust": 
0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40303" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42916" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22721" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-35252" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28614" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26377" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23943" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32221" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8559" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0686" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0639" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6429" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3515" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-21628" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0924" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0908" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1055" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35527" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26373" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-20368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1048" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-39399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0854" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29581" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-1016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2078" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42898" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21499" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42003" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27405" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0561" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27406" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21626" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28390" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27950" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27404" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30002" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1184" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25255" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42004" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-1355" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28893" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1798" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "169435" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168387" }, { "db": "PACKETSTORM", "id": "168182" }, { "db": "PACKETSTORM", "id": "168282" }, { "db": "PACKETSTORM", "id": "170165" }, { "db": 
"PACKETSTORM", "id": "169668" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "169435" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168387" }, { "db": "PACKETSTORM", "id": "168182" }, { "db": "PACKETSTORM", "id": "168282" }, { "db": "PACKETSTORM", "id": "170165" }, { "db": "PACKETSTORM", "id": "169668" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-06-21T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2022-10-20T14:19:18", "db": "PACKETSTORM", "id": "169435" }, { "date": "2022-08-25T15:22:18", "db": "PACKETSTORM", "id": "168150" }, { "date": "2022-09-15T14:18:16", "db": "PACKETSTORM", "id": "168387" }, { "date": "2022-08-25T15:29:18", "db": "PACKETSTORM", "id": "168182" }, { "date": "2022-09-07T16:56:15", "db": "PACKETSTORM", "id": "168282" }, { "date": "2022-12-08T21:28:21", "db": "PACKETSTORM", "id": "170165" }, { "date": "2022-06-21T12:12:12", "db": "PACKETSTORM", "id": "169668" }, { "date": "2022-09-13T15:42:14", "db": "PACKETSTORM", "id": "168352" }, { "date": "2022-12-09T14:52:40", "db": "PACKETSTORM", "id": "170179" }, { "date": "2022-09-15T14:20:18", "db": "PACKETSTORM", "id": "168392" }, { "date": "2022-06-21T15:15:09.060000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { 
"date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2023-11-07T03:46:11.177000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2022-7055-01", "sources": [ { "db": "PACKETSTORM", "id": "169435" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow, code execution", "sources": [ { "db": "PACKETSTORM", "id": "170165" } ], "trust": 0.1 } }
Sightings
Author | Source | Type | Date
---|---|---|---
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.