
4 vulnerabilities found for a250 by netapp

VAR-202203-0005

Vulnerability from variot - Updated: 2025-12-22 22:11

The BN_mod_sqrt() function, which computes a modular square root, contains a bug that can cause it to loop forever for non-prime moduli. Internally this function is used when parsing certificates that contain elliptic curve public keys in compressed form or explicit elliptic curve parameters with a base point encoded in compressed form. It is possible to trigger the infinite loop by crafting a certificate that has invalid explicit curve parameters. Since certificate parsing happens prior to verification of the certificate signature, any process that parses an externally supplied certificate may thus be subject to a denial of service attack. The infinite loop can also be reached when parsing crafted private keys as they can contain explicit elliptic curve parameters. Thus vulnerable situations include:

- TLS clients consuming server certificates
- TLS servers consuming client certificates
- Hosting providers taking certificates or private keys from customers
- Certificate authorities parsing certification requests from subscribers
- Anything else which parses ASN.1 elliptic curve parameters

Also any other applications that use BN_mod_sqrt() where the attacker can control the parameter values are vulnerable to this DoS issue. In OpenSSL 1.0.2 the public key is not parsed during initial parsing of the certificate, which makes it slightly harder to trigger the infinite loop. However, any operation which requires the public key from the certificate will trigger the infinite loop. In particular the attacker can use a self-signed certificate to trigger the loop during verification of the certificate signature. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0. It was addressed in the releases of 1.1.1n and 3.0.2 on 15 March 2022. Fixed in OpenSSL 3.0.2 (Affected 3.0.0-3.0.1). Fixed in OpenSSL 1.1.1n (Affected 1.1.1-1.1.1m). Fixed in OpenSSL 1.0.2zd (Affected 1.0.2-1.0.2zc).
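The class of bug described here lives in the Tonelli-Shanks style search that a modular square root routine performs. A minimal Python sketch (not OpenSSL's actual BN_mod_sqrt() code; the function name and the max_iter cap are illustrative assumptions) showing why the algorithm's inner searches can stop making progress when the modulus is not prime:

```python
def mod_sqrt(a, p, max_iter=100):
    """Tonelli-Shanks modular square root; correct only for odd prime p.

    Illustrative sketch, not OpenSSL's implementation. The searches
    below are only guaranteed to terminate for prime moduli; the
    hypothetical max_iter cap turns the non-termination that
    CVE-2022-0778 exploits into an explicit error instead.
    """
    a %= p
    if a == 0:
        return 0
    # Euler's criterion: for prime p, a must be a quadratic residue.
    if pow(a, (p - 1) // 2, p) != 1:
        return None
    # Write p - 1 = q * 2^s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q, s = q // 2, s + 1
    # Find a quadratic non-residue z. For composite p none may exist,
    # so an uncapped search here would never finish.
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
        if z > max_iter:
            raise ValueError("no non-residue found; modulus likely not prime")
    m, c = s, pow(z, q, p)
    t, r = pow(a, q, p), pow(a, (q + 1) // 2, p)
    for _ in range(max_iter):
        if t == 1:
            return r
        # Find the least i with t^(2^i) == 1; for composite p it may
        # not exist, which is where an uncapped loop spins forever.
        i, t2 = 0, t
        while t2 != 1 and i < m:
            t2, i = pow(t2, 2, p), i + 1
        if i == m:
            raise ValueError("no progress; modulus likely not prime")
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, pow(b, 2, p)
        t, r = (t * c) % p, (r * b) % p
    raise ValueError("iteration cap hit; modulus likely not prime")
```

For a prime modulus this returns a root (e.g. `mod_sqrt(10, 13)` yields 7, since 7² ≡ 10 mod 13), while a composite modulus such as 9 makes one of the capped searches fail instead of looping forever, which is exactly the guard the fixed OpenSSL releases add in spirit.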
The OpenSSL Project has published OpenSSL Security Advisory [15 March 2022]. Severity: High. OpenSSL's BN_mod_sqrt() computes square roots in a finite field; it can enter an infinite loop when the modulus is non-prime. Vulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.34 and prior and 8.0.25 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 4.4 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:H). (CVE-2021-2372) Vulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.34 and prior and 8.0.25 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 5.9 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H). (CVE-2021-2389) Vulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.35 and prior and 8.0.26 and prior. Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server as well as unauthorized update, insert or delete access to some of MySQL Server accessible data.
CVSS 3.1 Base Score 5.5 (Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:L/A:H). (CVE-2021-35604) get_sort_by_table in MariaDB prior to 10.6.2 allows an application crash via certain subquery uses of ORDER BY. (CVE-2021-46657) save_window_function_values in MariaDB prior to 10.6.3 allows an application crash because of incorrect handling of with_window_func=true for a subquery. (CVE-2021-46658) MariaDB prior to 10.7.2 allows an application crash because it does not recognize that SELECT_LEX::nest_level is local to each VIEW. (CVE-2021-46659) MariaDB up to and including 10.5.9 allows an application crash in find_field_in_tables and find_order_in_list via an unused common table expression (CTE). (CVE-2021-46661) MariaDB up to and including 10.5.9 allows a set_var.cc application crash via certain uses of an UPDATE statement in conjunction with a nested subquery. (CVE-2021-46662) MariaDB up to and including 10.5.13 allows a ha_maria::extra application crash via certain SELECT statements. (CVE-2021-46663) MariaDB up to and including 10.5.9 allows an application crash in sub_select_postjoin_aggr for a NULL value of aggr. (CVE-2021-46664) MariaDB up to and including 10.5.9 allows a sql_parse.cc application crash because of incorrect used_tables expectations. (CVE-2021-46665) MariaDB prior to 10.6.2 allows an application crash because of mishandling of a pushdown from a HAVING clause to a WHERE clause. (CVE-2021-46666) An integer overflow vulnerability was found in MariaDB, where an invalid size of ref_pointer_array is allocated. This issue results in a denial of service. (CVE-2021-46667) MariaDB up to and including 10.5.9 allows an application crash via certain long SELECT DISTINCT statements that improperly interact with storage-engine resource limitations for temporary data structures. (CVE-2021-46668) A use-after-free vulnerability was found in MariaDB. 
This flaw allows malicious users to trigger a convert_const_to_int() use-after-free when the BIGINT data type is used, resulting in a denial of service. (CVE-2022-0778) Vulnerability in the MySQL Server product of Oracle MySQL (component: C API). Supported versions that are affected are 5.7.36 and prior and 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 4.4 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:H). (CVE-2022-21595) MariaDB CONNECT Storage Engine Stack-based Buffer Overflow Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of proper validation of the length of user-supplied data prior to copying it to a fixed-length stack-based buffer. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16191. (CVE-2022-24048) MariaDB CONNECT Storage Engine Use-After-Free Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16207.
(CVE-2022-24050) MariaDB CONNECT Storage Engine Format String Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of proper validation of a user-supplied string before using it as a format specifier. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16193. (CVE-2022-24051) A flaw was found in MariaDB. Lack of input validation leads to a heap buffer overflow. This flaw allows an authenticated, local attacker with at least a low level of privileges to submit a crafted SQL query to MariaDB and escalate their privileges to the level of the MariaDB service user, running arbitrary code. (CVE-2022-24052) MariaDB Server v10.6.5 and below exists to contain an use-after-free in the component Item_args::walk_arg, which is exploited via specially crafted SQL statements. (CVE-2022-27376) MariaDB Server v10.6.3 and below exists to contain an use-after-free in the component Item_func_in::cleanup(), which is exploited via specially crafted SQL statements. (CVE-2022-27377) An issue in the component Create_tmp_table::finalize of MariaDB Server v10.7 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27378) An issue in the component Arg_comparator::compare_real_fixed of MariaDB Server v10.6.2 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27379) An issue in the component my_decimal::operator= of MariaDB Server v10.6.3 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. 
(CVE-2022-27380) An issue in the component Field::set_default of MariaDB Server v10.6 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27381) MariaDB Server v10.7 and below exists to contain a segmentation fault via the component Item_field::used_tables/update_depend_map_for_order. (CVE-2022-27382) MariaDB Server v10.6 and below exists to contain an use-after-free in the component my_strcasecmp_8bit, which is exploited via specially crafted SQL statements. (CVE-2022-27383) An issue in the component Item_subselect::init_expr_cache_tracker of MariaDB Server v10.6 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27384) An issue in the component Used_tables_and_const_cache::used_tables_and_const_cache_join of MariaDB Server v10.7 and below exists to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27385) MariaDB Server v10.7 and below exists to contain a segmentation fault via the component sql/sql_class.cc. (CVE-2022-27386) MariaDB Server v10.7 and below exists to contain a global buffer overflow in the component decimal_bin_size, which is exploited via specially crafted SQL statements. (CVE-2022-27387) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_subselect.cc. (CVE-2022-27444) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/sql_window.cc. (CVE-2022-27445) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_cmpfunc.h. (CVE-2022-27446) MariaDB Server v10.9 and below exists to contain a use-after-free via the component Binary_string::free_buffer() at /sql/sql_string.h. (CVE-2022-27447) There is an Assertion failure in MariaDB Server v10.9 and below via 'node->pcur->rel_pos == BTR_PCUR_ON' at /row/row0mysql.cc. 
(CVE-2022-27448) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_func.cc:148. (CVE-2022-27449) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/field_conv.cc. (CVE-2022-27451) MariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_cmpfunc.cc. (CVE-2022-27452) MariaDB Server v10.6.3 and below exists to contain an use-after-free in the component my_wildcmp_8bit_impl at /strings/ctype-simple.c. (CVE-2022-27455) MariaDB Server v10.6.3 and below exists to contain an use-after-free in the component VDec::VDec at /sql/sql_type.cc. (CVE-2022-27456) MariaDB Server v10.6.3 and below exists to contain an use-after-free in the component my_mb_wc_latin1 at /strings/ctype-latin1.c. (CVE-2022-27457) MariaDB Server v10.6.3 and below exists to contain an use-after-free in the component Binary_string::free_buffer() at /sql/sql_string.h. (CVE-2022-27458) MariaDB Server prior to 10.7 is vulnerable to Denial of Service. In extra/mariabackup/ds_compress.cc, when an error occurs (pthread_create returns a nonzero value) while executing the method create_worker_threads, the held lock is not released correctly, which allows local users to trigger a denial of service due to the deadlock. (CVE-2022-31622) MariaDB Server prior to 10.7 is vulnerable to Denial of Service. In extra/mariabackup/ds_compress.cc, when an error occurs (i.e., going to the err label) while executing the method create_worker_threads, the held lock thd->ctrl_mutex is not released correctly, which allows local users to trigger a denial of service due to the deadlock. (CVE-2022-31623) MariaDB Server prior to 10.7 is vulnerable to Denial of Service. While executing the plugin/server_audit/server_audit.c method log_statement_ex, the held lock lock_bigbuffer is not released correctly, which allows local users to trigger a denial of service due to the deadlock. 
(CVE-2022-31624) MariaDB v10.4 to v10.7 exists to contain an use-after-poison in prepare_inplace_add_virtual at /storage/innobase/handler/handler0alter.cc. (CVE-2022-32081) MariaDB v10.5 to v10.7 exists to contain an assertion failure at table->get_ref_count() == 0 in dict0dict.cc. (CVE-2022-32082) MariaDB v10.2 to v10.6.1 exists to contain a segmentation fault via the component Item_subselect::init_expr_cache_tracker. (CVE-2022-32083) MariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component sub_select. (CVE-2022-32084) MariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Item_func_in::cleanup/Item::cleanup_processor. (CVE-2022-32085) MariaDB v10.4 to v10.8 exists to contain a segmentation fault via the component Item_field::fix_outer_field. (CVE-2022-32086) MariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Item_args::walk_args. (CVE-2022-32087) MariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Exec_time_tracker::get_loops/Filesort_tracker::report_use/filesort. (CVE-2022-32088) MariaDB v10.5 to v10.7 exists to contain a segmentation fault via the component st_select_lex_unit::exclude_level. (CVE-2022-32089) MariaDB v10.7 exists to contain an use-after-poison in in __interceptor_memset at /libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc. (CVE-2022-32091) In MariaDB prior to 10.9.2, compress_write in extra/mariabackup/ds_compress.cc does not release data_mutex upon a stream write failure, which allows local users to trigger a deadlock. (CVE-2022-38791). See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHBA-2022:1355
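For reference, the CVSS 3.1 base scores quoted in the MySQL entries above (4.4, 5.9, 5.5) can be recomputed from their vectors. A simplified sketch of the CVSS v3.1 base-score formula, restricted to Scope:Unchanged vectors (the only case those vectors use; Scope:Changed needs different weights and equations):

```python
# Metric weights from the CVSS v3.1 specification (Scope:Unchanged only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # Scope:Unchanged PR weights
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x):
    # CVSS v3.1 "roundup": smallest one-decimal value >= x, with the
    # integer trick from the spec's reference pseudocode to dodge
    # floating-point edge cases.
    i = int(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector):
    # Parse "CVSS:3.1/AV:N/AC:H/..." into a metric -> value dict.
    m = dict(p.split(":") for p in vector.split("/")[1:])
    assert m["S"] == "U", "sketch handles Scope:Unchanged only"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
```

Feeding it the vectors from the entries above reproduces the published scores, e.g. `base_score("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H")` gives 5.9.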

Space precludes documenting all of the container images in this advisory.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-x86_64

The image digest is sha256:39efe13ef67cb4449f5e6cdd8a26c83c07c6a2ce5d235dfbc3ba58c64418fcf3

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-s390x

The image digest is sha256:49b63b22bc221e29e804fc3cc769c6eff97c655a1f5017f429aa0dad2593a0a8

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-ppc64le

The image digest is sha256:0d34e1198679a500a3af7acbdfba7864565f7c4f5367ca428d34dee9a9912c9c

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-aarch64

The image digest is sha256:ddf6cb04e74ac88874793a3c0538316c9ac8ff154267984c8a4ea7047913e1db
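A release image digest is the SHA-256 of the image manifest bytes as served by the registry, so a mirrored copy can be verified against the digests listed above. A minimal sketch of that comparison (the manifest bytes here are a placeholder for illustration, not a real OpenShift release manifest):

```python
import hashlib

def image_digest(manifest_bytes: bytes) -> str:
    # An OCI/Docker image digest is "sha256:" plus the hex SHA-256 of
    # the manifest exactly as served by the registry.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# Placeholder manifest bytes; a real check would fetch the manifest via
# the registry API and compare against the digest published in the
# advisory for the matching architecture.
manifest = b'{"schemaVersion": 2}'
print(image_digest(manifest))
```

Any byte-level change to the manifest changes the digest, which is why pulling by digest (`ocp-release@sha256:...`) pins the exact advisory payload rather than whatever a mutable tag currently points at.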

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the oc CLI. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2050118 - 4.10: oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2052414 - Start last run action should contain current user name in the started-by annotation of the PLR
2054404 - ip-reconcile job is failing consistently
2054767 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2054808 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2055661 - migrate loadbalancers from amphora to ovn not working
2057881 - MetalLB: speaker metrics is not updated when deleting a service
2059347 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2059945 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060362 - Openshift registry starts to segfault after S3 storage configuration
2060586 - [4.10.z] [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2064204 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2064988 - Fix the hubUrl docs link in pipeline quicksearch modal
2065488 - ip-reconciler job does not complete, halts node drain
2065832 - oc mirror hangs when processing the Red Hat 4.10 catalog
2067311 - PPT event source is lost when received by the consumer
2067719 - Update channels information link is taking to a 404 error page
2069095 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2069913 - Disabling community tasks is not working
2070131 - Installation of Openshift virtualization fails with error service "hco-webhook-service" not found
2070492 - [4.10.z backport] On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2070525 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2071479 - Thanos Querier high CPU and memory usage till OOM
2072191 - [4.10] cluster storage operator AWS credentialsrequest lacks KMS privileges
2072440 - Pipeline builder makes too many (100+) API calls upfront
2072928 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2022-05-16-3 macOS Big Sur 11.6.6

macOS Big Sur 11.6.6 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213256.

apache Available for: macOS Big Sur Impact: Multiple issues in apache Description: Multiple issues were addressed by updating apache to version 2.4.53. CVE-2021-44224 CVE-2021-44790 CVE-2022-22719 CVE-2022-22720 CVE-2022-22721

AppKit Available for: macOS Big Sur Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team

AppleAVD Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22675: an anonymous researcher

AppleGraphicsControl Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day Initiative

AppleScript Available for: macOS Big Sur Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read issue was addressed with improved bounds checking. CVE-2022-26698: Qi Sun of Trend Micro

AppleScript Available for: macOS Big Sur Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read issue was addressed with improved input validation. CVE-2022-26697: Qi Sun and Robert Ai of Trend Micro

CoreTypes Available for: macOS Big Sur Impact: A malicious application may bypass Gatekeeper checks Description: This issue was addressed with improved checks to prevent unauthorized actions. CVE-2022-22663: Arsenii Kostromin (0x3c3e)

CVMS Available for: macOS Big Sur Impact: A malicious application may be able to gain root privileges Description: A memory initialization issue was addressed. CVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori CVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori

DriverKit Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with system privileges Description: An out-of-bounds access issue was addressed with improved bounds checking. CVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)

Graphics Drivers Available for: macOS Big Sur Impact: A local user may be able to read kernel memory Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation. CVE-2022-22674: an anonymous researcher

Intel Graphics Driver Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26720: Liu Long of Ant Security Light-Year Lab

Intel Graphics Driver Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds read issue was addressed with improved input validation. CVE-2022-26770: Liu Long of Ant Security Light-Year Lab

Intel Graphics Driver Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26756: Jack Dates of RET2 Systems, Inc

Intel Graphics Driver Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26769: Antonio Zekic (@antoniozekic)

Intel Graphics Driver Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro Zero Day Initiative

IOMobileFrameBuffer Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26768: an anonymous researcher

Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-26714: Peter Nguyễn Vũ Hoàng (@peternguyen14) of STAR Labs (@starlabs_sg)

Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-26757: Ned Williamson of Google Project Zero

LaunchServices Available for: macOS Big Sur Impact: A malicious application may be able to bypass Privacy preferences Description: The issue was addressed with additional permissions checks. CVE-2022-26767: Wojciech Reguła (@_r3ggi) of SecuRing

LaunchServices Available for: macOS Big Sur Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: An access issue was addressed with additional sandbox restrictions on third-party applications. CVE-2022-26706: Arsenii Kostromin (0x3c3e)

libresolv Available for: macOS Big Sur Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-26776: Zubair Ashraf of Crowdstrike, Max Shavrick (@_mxms) of the Google Security Team

LibreSSL Available for: macOS Big Sur Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: A denial of service issue was addressed with improved input validation. CVE-2022-0778

libxml2 Available for: macOS Big Sur Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2022-23308

OpenSSL Available for: macOS Big Sur Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2022-0778

PackageKit Available for: macOS Big Sur Impact: A malicious application may be able to modify protected parts of the file system Description: This issue was addressed by removing the vulnerable code. CVE-2022-26712: Mickey Jin (@patch1t)

Printing Available for: macOS Big Sur Impact: A malicious application may be able to bypass Privacy preferences Description: This issue was addressed by removing the vulnerable code. CVE-2022-26746: @gorelics

Security Available for: macOS Big Sur Impact: A malicious app may be able to bypass signature validation Description: A certificate parsing issue was addressed with improved checks. CVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)

SMB Available for: macOS Big Sur Impact: An application may be able to gain elevated privileges Description: An out-of-bounds read issue was addressed with improved input validation. CVE-2022-26718: Peter Nguyễn Vũ Hoàng of STAR Labs

SMB Available for: macOS Big Sur Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26723: Felix Poulin-Belanger

SMB Available for: macOS Big Sur Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26715: Peter Nguyễn Vũ Hoàng of STAR Labs

SoftwareUpdate Available for: macOS Big Sur Impact: A malicious application may be able to access restricted files Description: This issue was addressed with improved entitlements. CVE-2022-26728: Mickey Jin (@patch1t)

TCC Available for: macOS Big Sur Impact: An app may be able to capture a user's screen Description: This issue was addressed with improved checks. CVE-2022-26726: an anonymous researcher

Tcl Available for: macOS Big Sur Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2022-26755: Arsenii Kostromin (0x3c3e)

Vim Available for: macOS Big Sur Impact: Multiple issues in Vim Description: Multiple issues were addressed by updating Vim. CVE-2021-4136 CVE-2021-4166 CVE-2021-4173 CVE-2021-4187 CVE-2021-4192 CVE-2021-4193 CVE-2021-46059 CVE-2022-0128

WebKit Available for: macOS Big Sur Impact: Processing a maliciously crafted mail message may lead to running arbitrary JavaScript Description: A validation issue was addressed with improved input sanitization. CVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu of Palo Alto Networks (paloaltonetworks.com)

Wi-Fi Available for: macOS Big Sur Impact: A malicious application may disclose restricted memory Description: A memory corruption issue was addressed with improved validation. CVE-2022-26745: an anonymous researcher

Wi-Fi Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2022-26761: Wang Yu of Cyberserval

zip Available for: macOS Big Sur Impact: Processing a maliciously crafted file may lead to a denial of service Description: A denial of service issue was addressed with improved state handling. CVE-2022-0530

zlib Available for: macOS Big Sur Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2018-25032: Tavis Ormandy

zsh Available for: macOS Big Sur Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed by updating to zsh version 5.8.1. CVE-2021-45444

Additional recognition

Bluetooth We would like to acknowledge Jann Horn of Project Zero for their assistance.

macOS Big Sur 11.6.6 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TUACgkQeC9qKD1p rhgJBg/9HpPp6P2OtFdYHigfaoga/3szMAjXC650MlC2rF1lXyTRVsO54eupz4er K8Iud3+YnDVTUKkadftWt2XdxAADGtfEFhJW584RtnWjeli+XtGEjQ8jD1/MNPJW qtnrOh2pYG9SxolKDofhiecbYxIGppRKSDRFl0/3VGFed2FIpiRDunlttHBEhHu/ vZVSFzMrNbGvhju+ZCdwFLKXOgB851aRSeo9Xkt63tSGiee7rLmVAINyFbbPwcVP yXwMvn0TNodCBn0wBWD0+iQ3UXIDIYSPaM1Z0BQxVraEhK3Owro3JKgqNbWswMvj SY0KUulbAPs3aOeyz1BI70npYA3+Qwd+bk2hxbzbU/AxvxCrsEk04QfxLYqvj0mR VZYPcup2KAAkiTeekQ5X739r8NAyaaI+bp7FllFv/Z2jVW9kGgNIFr46R05MD9NF aC1JAZtJ4VWbMEGHnHAMrOgdGaHpryvzl2BjUXRgW27vIq5uF5YiNcpjS2BezTFc R2ojiMNRB33Y44LlH7Zv3gHm4bE3+NzcGeWvBzwOsHznk9Jiv6x2eBUxkttMlPyO zymQMONQN3bktSMT8JnmJ8rlEgISONd7NeTEzuhlGIWaWNAFmmBoPnBiPk+yC3n4 d22yFs6DLp2pJ+0zOWmTcqt1xYng05Jwj4F0KT49w0TO9Up79+o= =rtPl -----END PGP SIGNATURE-----

Bugs fixed (https://bugzilla.redhat.com/):

2066837 - CVE-2022-24769 moby: Default inheritable capabilities for linux container should be empty

The updated image includes bug and security fixes.

Solution:

If you are using RHACS 3.68.1, you are advised to upgrade to patch release 3.68.2.

Bugs fixed (https://bugzilla.redhat.com/):

2090957 - CVE-2022-1902 stackrox: Improper sanitization allows users to retrieve Notifier secrets from GraphQL API in plaintext

  1. JIRA issues fixed (https://issues.jboss.org/):

ROX-11391 - Release RHACS 3.68.2
ROX-9657 - Patch supported RHACS images previous to 3.69.0 release to fix RHSA-2022:0658


====================================================================
Red Hat Security Advisory

Synopsis:          Important: openssl security update
Advisory ID:       RHSA-2022:1078-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1078
Issue date:        2022-03-28
CVE Names:         CVE-2022-0778
====================================================================

1. Summary:

An update for openssl is now available for Red Hat Enterprise Linux 7.6 Advanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update Support, and Red Hat Enterprise Linux 7.6 Update Services for SAP Solutions.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux Server AUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server TUS (v. 7.6) - x86_64

3. Description:

OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.

Security Fix(es):

  • openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
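The root cause of CVE-2022-0778 can be sketched in a few lines: BN_mod_sqrt() implements the Tonelli-Shanks algorithm, whose inner loop is only guaranteed to terminate when the modulus is prime. Below is a minimal Python sketch of the algorithm with the kind of loop bound the fix introduces; `mod_sqrt` and its guard messages are illustrative, not the OpenSSL API.

```python
def mod_sqrt(a, p, max_iters=128):
    """Tonelli-Shanks square root of a mod an odd prime p.

    CVE-2022-0778: OpenSSL's BN_mod_sqrt() ran this style of loop
    unbounded, so a crafted non-prime modulus (e.g. from explicit EC
    parameters in a certificate) made it spin forever.  This sketch
    bounds the loop and fails fast instead.
    """
    a %= p
    if a == 0:
        return 0
    # Euler's criterion; for a non-prime p this check (like everything
    # below) is unreliable, which is why the loop must be bounded.
    if pow(a, (p - 1) // 2, p) != 1:
        raise ValueError("no square root (or modulus is not prime)")
    if p % 4 == 3:
        return pow(a, (p + 1) // 4, p)
    # Factor p - 1 as q * 2**s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # Find a quadratic non-residue z.
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c = s, pow(z, q, p)
    t, r = pow(a, q, p), pow(a, (q + 1) // 2, p)
    for _ in range(max_iters):
        if t == 1:
            return r
        # Least i with t**(2**i) == 1; for a composite modulus this
        # search can fail -- the case the unpatched code never handled.
        i, t2 = 0, t
        while t2 != 1 and i < m:
            t2 = t2 * t2 % p
            i += 1
        if i == m:
            raise ValueError("loop not converging: modulus is not prime")
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, b * b % p
        t, r = t * c % p, r * b % p
    raise ValueError("iteration cap hit: modulus is not prime")
```

For a prime modulus the result squares back to the input; for a composite one the function now raises instead of looping.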

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
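One way to find services that still hold the old library in memory is to look for processes with an OpenSSL shared object mapped. The following is a minimal Linux-only sketch assuming a readable /proc filesystem; `processes_using_libssl` is an illustrative name, not a standard tool.

```python
import glob
import re

def processes_using_libssl():
    """Return PIDs that currently have libssl/libcrypto mapped.

    Reads /proc/<pid>/maps (Linux-specific); processes that exit
    mid-scan or deny access are silently skipped.  Each listed PID
    would need a restart for an OpenSSL update to take effect.
    """
    hits = []
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = maps_path.split("/")[2]
        try:
            with open(maps_path) as f:
                content = f.read()
        except OSError:
            continue  # process gone or not permitted
        if re.search(r"lib(ssl|crypto)\.so", content):
            hits.append(int(pid))
    return sorted(hits)
```

Running this after updating the package but before restarting services gives a quick worklist; an empty result on a non-Linux host simply means the scan found nothing to inspect.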

5. Package List:

Red Hat Enterprise Linux Server AUS (v. 7.6):

Source: openssl-1.0.2k-18.el7_6.src.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server E4S (v. 7.6):

Source: openssl-1.0.2k-18.el7_6.src.rpm

ppc64le:
openssl-1.0.2k-18.el7_6.ppc64le.rpm
openssl-debuginfo-1.0.2k-18.el7_6.ppc64le.rpm
openssl-devel-1.0.2k-18.el7_6.ppc64le.rpm
openssl-libs-1.0.2k-18.el7_6.ppc64le.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server TUS (v. 7.6):

Source: openssl-1.0.2k-18.el7_6.src.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional AUS (v. 7.6):

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional E4S (v. 7.6):

ppc64le:
openssl-debuginfo-1.0.2k-18.el7_6.ppc64le.rpm
openssl-perl-1.0.2k-18.el7_6.ppc64le.rpm
openssl-static-1.0.2k-18.el7_6.ppc64le.rpm

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional TUS (v. 7.6):

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

6. References:

https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/updates/classification/#important

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Summary:

Red Hat OpenShift Virtualization release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.

Description:

OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform.

This advisory contains the following OpenShift Virtualization 4.11.0 images:

RHEL-8-CNV-4.11
==============
hostpath-provisioner-container-v4.11.0-21
kubevirt-tekton-tasks-operator-container-v4.11.0-29
kubevirt-template-validator-container-v4.11.0-17
bridge-marker-container-v4.11.0-26
hostpath-csi-driver-container-v4.11.0-21
cluster-network-addons-operator-container-v4.11.0-26
ovs-cni-marker-container-v4.11.0-26
virtio-win-container-v4.11.0-16
ovs-cni-plugin-container-v4.11.0-26
kubemacpool-container-v4.11.0-26
hostpath-provisioner-operator-container-v4.11.0-24
cnv-containernetworking-plugins-container-v4.11.0-26
kubevirt-ssp-operator-container-v4.11.0-54
virt-cdi-uploadserver-container-v4.11.0-59
virt-cdi-cloner-container-v4.11.0-59
virt-cdi-operator-container-v4.11.0-59
virt-cdi-importer-container-v4.11.0-59
virt-cdi-uploadproxy-container-v4.11.0-59
virt-cdi-controller-container-v4.11.0-59
virt-cdi-apiserver-container-v4.11.0-59
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7
kubevirt-tekton-tasks-copy-template-container-v4.11.0-7
checkup-framework-container-v4.11.0-67
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7
vm-network-latency-checkup-container-v4.11.0-67
kubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7
hyperconverged-cluster-webhook-container-v4.11.0-95
cnv-must-gather-container-v4.11.0-62
hyperconverged-cluster-operator-container-v4.11.0-95
kubevirt-console-plugin-container-v4.11.0-83
virt-controller-container-v4.11.0-105
virt-handler-container-v4.11.0-105
virt-operator-container-v4.11.0-105
virt-launcher-container-v4.11.0-105
virt-artifacts-server-container-v4.11.0-105
virt-api-container-v4.11.0-105
libguestfs-tools-container-v4.11.0-105
hco-bundle-registry-container-v4.11.0-587

Security Fix(es):

  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)

  • kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)

  • golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)

  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)

  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)

  • golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)

  • golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)

  • golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)

  • golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)

  • golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)

  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)

  • golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)

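Several of the Go issues above come down to missing input validation. CVE-2022-23806, for instance, is a missing range check: crypto/elliptic's IsOnCurve could return true for coordinates outside the field range [0, p). The sketch below shows, in Python, the kind of validation the fix enforces, using the public NIST P-256 domain parameters; `is_valid_point` is an illustrative name, not the Go API.

```python
# NIST P-256 domain parameters (public constants from SEC 2 / NIST).
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5

def is_valid_point(x, y):
    """Reject out-of-range coordinates *before* the curve-equation test.

    Skipping the range check is the gap behind CVE-2022-23806: a value
    >= P can satisfy the equation modulo P while not being a canonical
    field element, confusing callers that compare encodings.
    """
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0
```

With the range check in place, a coordinate like `GX + P` is rejected even though it is congruent to `GX` modulo P.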
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):

1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implemenation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering to a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk is not able to edit
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - [CNV-4.11] HCO Being Unable to Reconcile State
2083093 - VM overview tab is crashed
2083097 - "Mount Windows drivers disk" should not show when the template is not "windows"
2083100 - Something keeps loading in the "node selector" modal
2083101 - "Restore default settings" never become available while editing CPU/Memory
2083135 - VM fails to schedule with vTPM in spec
2083256 - SSP Reconcile logging improvement when CR resources are changed
2083595 - [RFE] Disable VM descheduler if the VM is not live migratable
2084102 - [e2e] Many elements are lacking proper selector like 'data-test-id' or 'data-test'
2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails
2084418 - "Invalid SSH public key format" appears when drag ssh key file to "Authorized SSH Key" field
2084431 - User credentials for ssh is not in correct format
2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab.
2084532 - Console is crashed while detaching disk
2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)
2085320 - Tolerations rules is not adding correctly
2085322 - Not able to stop/restart VM if the VM is staying in "Starting"
2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode
2086278 - Cloud init script edit add " hostname='' " when is should not be added
2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode
2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode
2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode
2086294 - [dark mode] Can't see the number inside the donut chart in VMs per template card
2086303 - non-priv user can't create VM when namespace is not selected
2086479 - some modals use "Save" and some modals use "Submit"
2086486 - cluster overview getting started card include old information
2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend
2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard
2086803 - When clonnig a template we need to update vm labels and annotaions to match new template
2086825 - VM restore PVC uses exact source PVC request size
2086849 - Create from YAML example is not runnable
2087188 - When VM is stopped - adding disk failed to show
2087189 - When VM is stopped - adding disk failed to show
2087232 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed
2087546 - "Quick Starts" is missing in Getting started card
2087547 - Activity and Status card are missing in Virtualization Overview
2087559 - template in "VMs per template" should take user to vm list page
2087566 - Remove the "auto upload" label from template in the catalog if the auto-upload boot source not exists
2087570 - Page title should be "VirtualMachines" and not "Virtual Machines"
2087577 - "VMs per template" load time is a bit long
2087578 - Terminology "VM" should be "Virtual Machine" in all places
2087582 - Remove VMI and MTV from the navigation
2087583 - [RFE] Show more info about boot source in template list
2087584 - Template provider should not be mandatory
2087587 - Improve the descriptive text in the kebab menu of template
2087589 - Red icons shows in storage disk source selection without a good reason
2087590 - [REF] "Upload a new file to a PVC" should not open the form in a new tab
2087593 - "Boot method" is not a good name in overview tab
2087603 - Align details card for single VM overview with the design doc
2087616 - align the utilization card of single VM overview with the design
2087701 - [RFE] Missing a link to VMI from running VM details page
2087717 - Message when editing template boot source is wrong
2088034 - Virtualization Overview crashes when a VirtualMachine has no labels
2088355 - disk modal shows all storage classes as default
2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2088379 - Create VM from catalog does not respect the storageclass of the template's boot source
2088407 - Missing create button in the template list
2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context
2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11
2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error
2088849 - "dataimportcrontemplate.kubevirt.io/enable" field does not do any validation
2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco
2089271 - Virtualization appears twice in sidebar
2089327 - add network modal crash when no networks available
2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page
2089477 - [RFE] Allow upload source when adding VM disk
2089700 - Drive column in Disks card of Overview page has duplicated values
2089745 - When removing all disks from customize wizard app crashes
2089789 - Add windows drivers disk is missing when template is not windows
2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user
2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages
2089840 - Cant create snapshot if VM is without disks
2089877 - Utilization card on single VM overview - timespan menu lacks 5min option
2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update
2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics
2089954 - Details card on single VM overview - VNC console has grey padding
2089963 - Details card on single VM overview - Operating system info is not available
2089967 - Network Interfaces card on single VM overview - name tooltip lacks info
2089970 - Network Interfaces card on single VM overview - IP tooltip
2089972 - Disks card on single VM overview -typo
2089979 - Single VM Details - CPU|Memory edit icon misplaced
2089982 - Single VM Details - SSH modal has redundant VM name
2090035 - Alert card is missing in single VM overview
2090036 - OS should be "Operating system" and host should be "hostname" in single vm overview
2090037 - Add template link in single vm overview details card
2090038 - The update field under the version in overview should be consistent with the operator page
2090042 - Move the edit button close to the text for "boot order" and "ssh access"
2090043 - "No resource selected" in vm boot order
2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page
2090048 - "Boot mode" should be editable while VM is running
2090054 - Services "kubernetes" and "openshift" should not be listing in vm details
2090055 - Add link to vm template in vm details page
2090056 - "Something went wrong" shows on VM "Environment" tab
2090057 - "?" icon is too big in environment and disk tab
2090059 - Failed to add configmap in environment tab due to validate error
2090064 - Miss "remote desktop" in console dropdown list for windows VM
2090066 - [RFE] Improve guest login credentials
2090068 - Make the "name" and "Source" column wider in vm disk tab
2090131 - Key's value in "add affinity rule" modal is too small
2090350 - memory leak in virt-launcher process
2091003 - SSH service is not deleted along the VM
2091058 - After VM gets deleted, the user is redirected to a page with a different namespace
2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec.
2091406 - wrong template namespace label when creating a vm with wizard
2091754 - Scheduling and scripts tab should be editable while the VM is running
2091755 - Change bottom "Save" to "Apply" on cloud-init script form
2091756 - The root disk of cloned template should be editable
2091758 - "OS" should be "Operating system" in template filter
2091760 - The provider should be empty if it's not set during cloning
2091761 - Miss "Edit labels" and "Edit annotations" in template kebab button
2091762 - Move notification above the tabs in template details page
2091764 - Clone a template should lead to the template details
2091765 - "Edit bootsource" is keeping in load in template actions dropdown
2091766 - "Are you sure you want to leave this page?" pops up when click the "Templates" link
2091853 - On Snapshot tab of single VM "Restore" button should move to the kebab actions together with the Delete
2091863 - BootSource edit modal should list affected templates
2091868 - Catalog list view has two columns named "BootSource"
2091889 - Devices should be editable for customize template
2091897 - username is missing in the generated ssh command
2091904 - VM is not started if adding "Authorized SSH Key" during vm creation
2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root
2091940 - SSH is not enabled in vm details after restart the VM
2091945 - delete a template should lead to templates list
2091946 - Add disk modal shows wrong units
2091982 - Got a lot of "Reconciler error" in cdi-deployment log after adding custom DataImportCron to hco
2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank
2092052 - Virtualization should be omitted in Calatog breadcrumbs
2092071 - Getting started card in Virtualization overview can not be hidden.
2092079 - Error message stays even when problematic field is dismissed
2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO
2092228 - Ensure Machine Type for new VMs is 8.6
2092230 - [RFE] Add indication/mark to deprecated template
2092306 - VM is stucking with WaitingForVolumeBinding if creating via "Boot from CD"
2092337 - os is empty in VM details page
2092359 - [e2e] data-test-id includes all pvc name
2092654 - [RFE] No obvious way to delete the ssh key from the VM
2092662 - No url example for rhel and windows template
2092663 - no hyperlink for URL example in disk source "url"
2092664 - no hyperlink to the cdi uploadproxy URL
2092781 - Details card should be removed for non admins.
2092783 - Top consumers' card should be removed for non admins.
2092787 - Operators links should be removed from Getting started card
2092789 - "Learn more about Operators" link should lead to the Red Hat documentation
2092951 - "Edit BootSource" action should have more explicit information when disabled
2093282 - Remove links to 'all-namespaces/' for non-privileged user
2093691 - Creation flow drawer left padding is broken
2093713 - Required fields in creation flow should be highlighted if empty
2093715 - Optional parameters section in creation flow is missing bottom padding
2093716 - CPU|Memory modal button should say "Restore template settings"
2093772 - Add a service in environment it reminds a pending change in boot order
2093773 - Console crashed if adding a service without serial number
2093866 - Cannot create vm from the template vm-template-example
2093867 - OS for template 'vm-template-example' should matching the version of the image
2094202 - Cloud-init username field should have hint
2094207 - Cloud-init password field should have auto-generate option
2094208 - SSH key input is missing validation
2094217 - YAML view should reflect shanges in SSH form
2094222 - "?" icon should be placed after red asterisk in required fields
2094323 - Workload profile should be editable in template details page
2094405 - adding resource on enviornment isnt showing on disks list when vm is running
2094440 - Utilization pie charts figures are not based on current data
2094451 - PVC selection in VM creation flow does not work for non-priv user
2094453 - CD Source selection in VM creation flow is missing Upload option
2094465 - Typo in Source tooltip
2094471 - Node selector modal for non-privileged user
2094481 - Tolerations modal for non-privileged user
2094486 - Add affinity rule modal
2094491 - Affinity rules modal button
2094495 - Descheduler modal has same text in two lines
2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id
2094665 - Dedicated Resources modal for non-privileged user
2094678 - Secrets and ConfigMaps can't be added to Windows VM
2094727 - Creation flow should have VM info in header row
2094807 - hardware devices dropdown has group title even with no devices in cluster
2094813 - Cloudinit password is seen in wizard
2094848 - Details card on Overview page - 'View details' link is missing
2095125 - OS is empty in the clone modal
2095129 - "undefined" appears in rootdisk line in clone modal
2095224 - affinity modal for non-privileged users
2095529 - VM migration cancelation in kebab action should have shorter name
2095530 - Column sizes in VM list view
2095532 - Node column in VM list view is visible to non-privileged user
2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime
2095570 - Details tab of VM should not have Node info for non-privileged user
2095573 - Disks created as environment or scripts should have proper label
2095953 - VNC console controls layout
2095955 - VNC console tabs
2096166 - Template "vm-template-example" is binding with namespace "default"
2096206 - Inconsistent capitalization in Template Actions
2096208 - Templates in the catalog list is not sorted
2096263 - Incorrectly displaying units for Disks size or Memory field in various places
2096333 - virtualization overview, related operators title is not aligned
2096492 - Cannot create vm from a cloned template if its boot source is edited
2096502 - "Restore template settings" should be removed from template CPU editor
2096510 - VM can be created without any disk
2096511 - Template shows "no Boot Source" and label "Source available" at the same time
2096620 - in templates list, edit boot reference kebab action opens a modal with different title
2096781 - Remove boot source provider while edit boot source reference
2096801 - vnc thumbnail in virtual machine overview should be active on page load
2096845 - Windows template's scripts tab is crashed
2097328 - virtctl guestfs shouldn't required uid = 0
2097370 - missing titles for optional parameters in wizard customization page
2097465 - Count is not updating for 'prometheusrule' component when metrics kubevirt_hco_out_of_band_modifications_count executed
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2098134 - "Workload profile" column is not showing completely in template list
2098135 - Workload is not showing correct in catalog after change the template's workload
2098282 - Javascript error when changing boot source of custom template to be an uploaded file
2099443 - No "Quick create virtualmachine" button for template 'vm-template-example'
2099533 - ConsoleQuickStart for HCO CR's VM is missing
2099535 - The cdi-uploadproxy certificate url should be opened in a new tab
2099539 - No storage option for upload while editing a disk
2099566 - Cloudinit should be replaced by cloud-init in all places
2099608 - "DynamicB" shows in vm-example disk size
2099633 - Doc links needs to be updated
2099639 - Remove user line from the ssh command section
2099802 - Details card link shouldn't be hard-coded
2100054 - Windows VM with WSL2 guest fails to migrate
2100284 - Virtualization overview is crashed
2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101485 - Cloudinit should be replaced by cloud-init in all places
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer
2102076 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2102122 - non-priv user cannot load dataSource while edit template's rootdisk
2102124 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2102125 - vm clone modal is displaying DV size instead of PVC size
2102127 - Cannot add NIC to VM template as non-priv user
2102129 - All templates are labeling "source available" in template list page
2102131 - The number of hardware devices is not correct in vm overview tab
2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2102143 - vm clone modal is displaying DV size instead of PVC size
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102543 - Add button moved to right
2102544 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102545 - VM filter has two "Other" checkboxes which are triggered together
2104617 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2106175 - All pages are crashed after visit Virtualization -> Overview
2106258 - All pages are crashed after visit Virtualization -> Overview
2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions
2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111562 - kubevirt plugin console crashed after visit vmi page
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs

  1. Bugs fixed (https://bugzilla.redhat.com/):

2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled
2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled
2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server


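The root cause described in this entry is that BN_mod_sqrt() implements the Tonelli-Shanks modular square-root algorithm, whose termination argument only holds when the modulus is an odd prime; for a crafted non-prime modulus its internal searches never succeed, and the unpatched code had no iteration bound. The following is a minimal illustrative Python sketch of Tonelli-Shanks (not OpenSSL's actual code); the explicit loop bounds stand in for the kind of guard the fix introduced:

```python
def mod_sqrt(a, p):
    """Return r with r*r % p == a, assuming p is an odd prime.

    For a composite p the two bounded searches below may never
    succeed; without the explicit bounds they could spin forever,
    which is the failure mode behind CVE-2022-0778.
    """
    a %= p
    if a == 0:
        return 0
    # Euler's criterion: a must be a quadratic residue mod p.
    if pow(a, (p - 1) // 2, p) != 1:
        return None
    # Write p - 1 = q * 2^s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # Find a quadratic non-residue z; one exists for every odd prime p,
    # but for a composite p this search can exhaust all candidates.
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
        if z >= p:
            raise ValueError("no non-residue found: modulus not prime?")
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        # Find the least i with t^(2^i) == 1; for prime p, 0 < i < m.
        i, t2i = 0, t
        while t2i != 1:
            t2i = t2i * t2i % p
            i += 1
            if i >= m:
                raise ValueError("loop bound hit: modulus not prime?")
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

print(mod_sqrt(10, 13))  # 7, since 7*7 = 49 = 10 (mod 13)
```

With a prime modulus the function returns a valid root; with a composite modulus such as 21 and a value that passes Euler's check (for example 8), the non-residue search exhausts every candidate and the guard raises instead of looping forever.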

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0005",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "ucosminexus primary server base",
        "scope": null,
        "trust": 1.6,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "ucosminexus application server",
        "scope": null,
        "trust": 1.6,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "ucosminexus service platform",
        "scope": null,
        "trust": 1.6,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.15.0"
      },
      {
        "model": "nessus",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "10.0.0"
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.2.42"
      },
      {
        "model": "santricity smi-s provider",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "nessus",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "10.1.2"
      },
      {
        "model": "node.js",
        "scope": "gt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1n"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "12.13.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2zd"
      },
      {
        "model": "a250",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "nessus",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "8.15.4"
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.7.2"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.7.0"
      },
      {
        "model": "clustered data ontap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.2"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.14.0"
      },
      {
        "model": "node.js",
        "scope": "gt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "17.0.0"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.4.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "12.12.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.19.1"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.2.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      },
      {
        "model": "storagegrid",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.6.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "12.22.11"
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.5.14"
      },
      {
        "model": "clustered data ontap antivirus connector",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "cloud volumes ontap mediator",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.3.33"
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.4.23"
      },
      {
        "model": "mariadb",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.6.6"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "17.7.2"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.5.0"
      },
      {
        "model": "node.js",
        "scope": "gt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "12.0.0"
      },
      {
        "model": "mariadb",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "mariadb",
        "version": "10.3.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.14.2"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "500f",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "neoface monitor",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "jp1/automatic job management system 3",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "ucosminexus application server-r",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "ucosminexus developer",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "mission critical mail",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "jp1/base",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "\u65e5\u7acb\u9ad8\u4fe1\u983c\u30b5\u30fc\u30d0 rv3000",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "nec \u30a8\u30c3\u30b8\u30b2\u30fc\u30c8\u30a6\u30a7\u30a4",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "\u65e5\u7acb\u30a2\u30c9\u30d0\u30f3\u30b9\u30c8\u30b5\u30fc\u30d0 ha8000v \u30b7\u30ea\u30fc\u30ba",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "esmpro/serveragentservice",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "connexive application platform",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "webotx application server",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "ucosminexus service architect",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "univerge",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "cosminexus http server",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "webotx sip application server",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "connexive pf",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "esmpro/serveragent",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "istoragemanager express",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "jp1/file transmission server/ftp",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "actsecure \u30dd\u30fc\u30bf\u30eb",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "iot \u5171\u901a\u57fa\u76e4",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "simpwright",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "nec enhanced video analytics",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "ism\u30b5\u30fc\u30d0",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "jp1/performance management",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "openssl",
        "scope": null,
        "trust": 0.8,
        "vendor": "openssl",
        "version": null
      },
      {
        "model": "nec ai accelerator",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "ix \u30eb\u30fc\u30bf",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "witchymail",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "istoragemanager",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      },
      {
        "model": "nec cyber security platform",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "PACKETSTORM",
        "id": "167225"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2022-0778",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2022-0778",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-0778",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2022-0778",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-0778",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-0778",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-0778",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The BN_mod_sqrt() function, which computes a modular square root, contains a bug that can cause it to loop forever for non-prime moduli. Internally this function is used when parsing certificates that contain elliptic curve public keys in compressed form or explicit elliptic curve parameters with a base point encoded in compressed form. It is possible to trigger the infinite loop by crafting a certificate that has invalid explicit curve parameters. Since certificate parsing happens prior to verification of the certificate signature, any process that parses an externally supplied certificate may thus be subject to a denial of service attack. The infinite loop can also be reached when parsing crafted private keys as they can contain explicit elliptic curve parameters. Thus vulnerable situations include: - TLS clients consuming server certificates - TLS servers consuming client certificates - Hosting providers taking certificates or private keys from customers - Certificate authorities parsing certification requests from subscribers - Anything else which parses ASN.1 elliptic curve parameters Also any other applications that use the BN_mod_sqrt() where the attacker can control the parameter values are vulnerable to this DoS issue. In the OpenSSL 1.0.2 version the public key is not parsed during initial parsing of the certificate which makes it slightly harder to trigger the infinite loop. However any operation which requires the public key from the certificate will trigger the infinite loop. In particular the attacker can use a self-signed certificate to trigger the loop during verification of the certificate signature. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0. It was addressed in the releases of 1.1.1n and 3.0.2 on the 15th March 2022. Fixed in OpenSSL 3.0.2 (Affected 3.0.0,3.0.1). Fixed in OpenSSL 1.1.1n (Affected 1.1.1-1.1.1m). Fixed in OpenSSL 1.0.2zd (Affected 1.0.2-1.0.2zc). 
The OpenSSL Project has published OpenSSL Security Advisory [15 March 2022], rating this issue High severity. The BN_mod_sqrt() function in OpenSSL computes square roots in a finite field; it can fall into an infinite loop when the modulus is non-prime. Vulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.34 and prior and 8.0.25 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 4.4 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:H). (CVE-2021-2372)\nVulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.34 and prior and 8.0.25 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 5.9 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H). (CVE-2021-2389)\nVulnerability in the MySQL Server product of Oracle MySQL (component: InnoDB). Supported versions that are affected are 5.7.35 and prior and 8.0.26 and prior. Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server as well as unauthorized update, insert or delete access to some of MySQL Server accessible data. 
CVSS 3.1 Base Score 5.5 (Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:L/A:H). (CVE-2021-35604)\nget_sort_by_table in MariaDB prior to 10.6.2 allows an application crash via certain subquery uses of ORDER BY. (CVE-2021-46657)\nsave_window_function_values in MariaDB prior to 10.6.3 allows an application crash because of incorrect handling of with_window_func=true for a subquery. (CVE-2021-46658)\nMariaDB prior to 10.7.2 allows an application crash because it does not recognize that SELECT_LEX::nest_level is local to each VIEW. (CVE-2021-46659)\nMariaDB up to and including 10.5.9 allows an application crash in find_field_in_tables and find_order_in_list via an unused common table expression (CTE). (CVE-2021-46661)\nMariaDB up to and including 10.5.9 allows a set_var.cc application crash via certain uses of an UPDATE statement in conjunction with a nested subquery. (CVE-2021-46662)\nMariaDB up to and including 10.5.13 allows a ha_maria::extra application crash via certain SELECT statements. (CVE-2021-46663)\nMariaDB up to and including 10.5.9 allows an application crash in sub_select_postjoin_aggr for a NULL value of aggr. (CVE-2021-46664)\nMariaDB up to and including 10.5.9 allows a sql_parse.cc application crash because of incorrect used_tables expectations. (CVE-2021-46665)\nMariaDB prior to 10.6.2 allows an application crash because of mishandling of a pushdown from a HAVING clause to a WHERE clause. (CVE-2021-46666)\nAn integer overflow vulnerability was found in MariaDB, where an invalid size of ref_pointer_array is allocated. This issue results in a denial of service. (CVE-2021-46667)\nMariaDB up to and including 10.5.9 allows an application crash via certain long SELECT DISTINCT statements that improperly interact with storage-engine resource limitations for temporary data structures. (CVE-2021-46668)\nA use-after-free vulnerability was found in MariaDB. 
This flaw allows malicious users to trigger a convert_const_to_int() use-after-free when the BIGINT data type is used, resulting in a denial of service. (CVE-2022-0778)\nVulnerability in the MySQL Server product of Oracle MySQL (component: C API). Supported versions that are affected are 5.7.36 and prior and 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server. Successful attacks of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server. CVSS 3.1 Base Score 4.4 (Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:H). (CVE-2022-21595)\nMariaDB CONNECT Storage Engine Stack-based Buffer Overflow Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of proper validation of the length of user-supplied data prior to copying it to a fixed-length stack-based buffer. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16191. (CVE-2022-24048)\nMariaDB CONNECT Storage Engine Use-After-Free Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16207. 
(CVE-2022-24050)\nMariaDB CONNECT Storage Engine Format String Privilege Escalation Vulnerability. This vulnerability allows local malicious users to escalate privileges on affected installations of MariaDB. Authentication is required to exploit this vulnerability. The specific flaw exists within the processing of SQL queries. The issue results from the lack of proper validation of a user-supplied string before using it as a format specifier. An attacker can leverage this vulnerability to escalate privileges and execute arbitrary code in the context of the service account. Was ZDI-CAN-16193. (CVE-2022-24051)\nA flaw was found in MariaDB. Lack of input validation leads to a heap buffer overflow. This flaw allows an authenticated, local attacker with at least a low level of privileges to submit a crafted SQL query to MariaDB and escalate their privileges to the level of the MariaDB service user, running arbitrary code. (CVE-2022-24052)\nMariaDB Server v10.6.5 and below was discovered to contain a use-after-free in the component Item_args::walk_arg, which is exploited via specially crafted SQL statements. (CVE-2022-27376)\nMariaDB Server v10.6.3 and below was discovered to contain a use-after-free in the component Item_func_in::cleanup(), which is exploited via specially crafted SQL statements. (CVE-2022-27377)\nAn issue in the component Create_tmp_table::finalize of MariaDB Server v10.7 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27378)\nAn issue in the component Arg_comparator::compare_real_fixed of MariaDB Server v10.6.2 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27379)\nAn issue in the component my_decimal::operator= of MariaDB Server v10.6.3 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. 
(CVE-2022-27380)\nAn issue in the component Field::set_default of MariaDB Server v10.6 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27381)\nMariaDB Server v10.7 and below was discovered to contain a segmentation fault via the component Item_field::used_tables/update_depend_map_for_order. (CVE-2022-27382)\nMariaDB Server v10.6 and below was discovered to contain a use-after-free in the component my_strcasecmp_8bit, which is exploited via specially crafted SQL statements. (CVE-2022-27383)\nAn issue in the component Item_subselect::init_expr_cache_tracker of MariaDB Server v10.6 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27384)\nAn issue in the component Used_tables_and_const_cache::used_tables_and_const_cache_join of MariaDB Server v10.7 and below was discovered to allow malicious users to cause a Denial of Service (DoS) via specially crafted SQL statements. (CVE-2022-27385)\nMariaDB Server v10.7 and below was discovered to contain a segmentation fault via the component sql/sql_class.cc. (CVE-2022-27386)\nMariaDB Server v10.7 and below was discovered to contain a global buffer overflow in the component decimal_bin_size, which is exploited via specially crafted SQL statements. (CVE-2022-27387)\nMariaDB Server v10.9 and below was discovered to contain a segmentation fault via the component sql/item_subselect.cc. (CVE-2022-27444)\nMariaDB Server v10.9 and below was discovered to contain a segmentation fault via the component sql/sql_window.cc. (CVE-2022-27445)\nMariaDB Server v10.9 and below was discovered to contain a segmentation fault via the component sql/item_cmpfunc.h. (CVE-2022-27446)\nMariaDB Server v10.9 and below was discovered to contain a use-after-free via the component Binary_string::free_buffer() at /sql/sql_string.h. 
(CVE-2022-27447)\nThere is an Assertion failure in MariaDB Server v10.9 and below via \u0027node-\u0026gt;pcur-\u0026gt;rel_pos == BTR_PCUR_ON\u0027 at /row/row0mysql.cc. (CVE-2022-27448)\nMariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_func.cc:148. (CVE-2022-27449)\nMariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/field_conv.cc. (CVE-2022-27451)\nMariaDB Server v10.9 and below exists to contain a segmentation fault via the component sql/item_cmpfunc.cc. (CVE-2022-27452)\nMariaDB Server v10.6.3 and below exists to contain an use-after-free in the component my_wildcmp_8bit_impl at /strings/ctype-simple.c. (CVE-2022-27455)\nMariaDB Server v10.6.3 and below exists to contain an use-after-free in the component VDec::VDec at /sql/sql_type.cc. (CVE-2022-27456)\nMariaDB Server v10.6.3 and below exists to contain an use-after-free in the component my_mb_wc_latin1 at /strings/ctype-latin1.c. (CVE-2022-27457)\nMariaDB Server v10.6.3 and below exists to contain an use-after-free in the component Binary_string::free_buffer() at /sql/sql_string.h. (CVE-2022-27458)\nMariaDB Server prior to 10.7 is vulnerable to Denial of Service. In extra/mariabackup/ds_compress.cc, when an error occurs (pthread_create returns a nonzero value) while executing the method create_worker_threads, the held lock is not released correctly, which allows local users to trigger a denial of service due to the deadlock. (CVE-2022-31622)\nMariaDB Server prior to 10.7 is vulnerable to Denial of Service. In extra/mariabackup/ds_compress.cc, when an error occurs (i.e., going to the err label) while executing the method create_worker_threads, the held lock thd-\u0026gt;ctrl_mutex is not released correctly, which allows local users to trigger a denial of service due to the deadlock. (CVE-2022-31623)\nMariaDB Server prior to 10.7 is vulnerable to Denial of Service. 
While executing the plugin/server_audit/server_audit.c method log_statement_ex, the held lock lock_bigbuffer is not released correctly, which allows local users to trigger a denial of service due to the deadlock. (CVE-2022-31624)\nMariaDB v10.4 to v10.7 exists to contain an use-after-poison in prepare_inplace_add_virtual at /storage/innobase/handler/handler0alter.cc. (CVE-2022-32081)\nMariaDB v10.5 to v10.7 exists to contain an assertion failure at table-\u0026gt;get_ref_count() == 0 in dict0dict.cc. (CVE-2022-32082)\nMariaDB v10.2 to v10.6.1 exists to contain a segmentation fault via the component Item_subselect::init_expr_cache_tracker. (CVE-2022-32083)\nMariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component sub_select. (CVE-2022-32084)\nMariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Item_func_in::cleanup/Item::cleanup_processor. (CVE-2022-32085)\nMariaDB v10.4 to v10.8 exists to contain a segmentation fault via the component Item_field::fix_outer_field. (CVE-2022-32086)\nMariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Item_args::walk_args. (CVE-2022-32087)\nMariaDB v10.2 to v10.7 exists to contain a segmentation fault via the component Exec_time_tracker::get_loops/Filesort_tracker::report_use/filesort. (CVE-2022-32088)\nMariaDB v10.5 to v10.7 exists to contain a segmentation fault via the component st_select_lex_unit::exclude_level. (CVE-2022-32089)\nMariaDB v10.7 exists to contain an use-after-poison in in __interceptor_memset at /libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc. (CVE-2022-32091)\nIn MariaDB prior to 10.9.2, compress_write in extra/mariabackup/ds_compress.cc does not release data_mutex upon a stream write failure, which allows local users to trigger a deadlock. (CVE-2022-38791). 
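Several of the mariabackup and audit-plugin flaws listed above (CVE-2022-31622, CVE-2022-31623, CVE-2022-31624, CVE-2022-38791) share one pattern: a mutex is acquired, an error path exits without releasing it, and every later caller deadlocks. A minimal sketch of the pattern and its fix follows; this is illustrative Python, not MariaDB code, and the function names are hypothetical stand-ins for create_worker_threads:

```python
import threading

ctrl_mutex = threading.Lock()  # stands in for thd->ctrl_mutex

def create_worker_threads_buggy(fail: bool = False) -> bool:
    # Bug pattern: the error path returns while the lock is still held,
    # so the next caller blocks forever -- the reported deadlock.
    ctrl_mutex.acquire()
    if fail:
        return False  # lock never released on this path
    ctrl_mutex.release()
    return True

def create_worker_threads_fixed(fail: bool = False) -> bool:
    # Fix pattern: release the lock on every path.  The upstream C fix
    # adds the missing pthread_mutex_unlock() before the error return;
    # a context manager achieves the same guarantee here.
    with ctrl_mutex:
        if fail:
            return False
    return True
```

After `create_worker_threads_buggy(fail=True)` the lock is still held (`ctrl_mutex.locked()` is true), whereas the fixed variant leaves it released on both the success and error paths.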
See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHBA-2022:1355

Space precludes documenting all of the container images in this advisory.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-x86_64

The image digest is sha256:39efe13ef67cb4449f5e6cdd8a26c83c07c6a2ce5d235dfbc3ba58c64418fcf3

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-s390x

The image digest is sha256:49b63b22bc221e29e804fc3cc769c6eff97c655a1f5017f429aa0dad2593a0a8

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-ppc64le

The image digest is sha256:0d34e1198679a500a3af7acbdfba7864565f7c4f5367ca428d34dee9a9912c9c

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-aarch64

The image digest is sha256:ddf6cb04e74ac88874793a3c0538316c9ac8ff154267984c8a4ea7047913e1db

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2050118 - 4.10: oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2052414 - Start last run action should contain current user name in the started-by annotation of the PLR
2054404 - ip-reconcile job is failing consistently
2054767 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2054808 - MetalLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2055661 - migrate loadbalancers from amphora to ovn not working
2057881 - MetalLB: speaker metrics is not updated when deleting a service
2059347 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2059945 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060362 - Openshift registry starts to segfault after S3 storage configuration
2060586 - [4.10.z] [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2064204 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2064988 - Fix the hubUrl docs link in pipeline quicksearch modal
2065488 - ip-reconciler job does not complete, halts node drain
2065832 - oc mirror hangs when processing the Red Hat 4.10 catalog
2067311 - PPT event source is lost when received by the consumer
2067719 - Update channels information link is taking to a 404 error page
2069095 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2069913 - Disabling community tasks is not working
2070131 - Installation of Openshift virtualization fails with error service "hco-webhook-service" not found
2070492 - [4.10.z backport] On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2070525 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2071479 - Thanos Querier high CPU and memory usage till OOM
2072191 - [4.10] cluster storage operator AWS credentialsrequest lacks KMS privileges
2072440 - Pipeline builder makes too many (100+) API calls upfront
2072928 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform

5. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2022-05-16-3 macOS Big Sur 11.6.6

macOS Big Sur 11.6.6 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213256.

apache
Available for: macOS Big Sur
Impact: Multiple issues in apache
Description: Multiple issues were addressed by updating apache to version 2.4.53.
CVE-2021-44224
CVE-2021-44790
CVE-2022-22719
CVE-2022-22720
CVE-2022-22721

AppKit
Available for: macOS Big Sur
Impact: A malicious application may be able to gain root privileges
Description: A logic issue was addressed with improved validation.
CVE-2022-22665: Lockheed Martin Red Team

AppleAVD
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited.
Description: An out-of-bounds write issue was addressed with improved bounds checking.
CVE-2022-22675: an anonymous researcher

AppleGraphicsControl
Available for: macOS Big Sur
Impact: Processing a maliciously crafted image may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved input validation.
CVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day Initiative

AppleScript
Available for: macOS Big Sur
Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory
Description: An out-of-bounds read issue was addressed with improved bounds checking.
CVE-2022-26698: Qi Sun of Trend Micro

AppleScript
Available for: macOS Big Sur
Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory
Description: An out-of-bounds read issue was addressed with improved input validation.
CVE-2022-26697: Qi Sun and Robert Ai of Trend Micro

CoreTypes
Available for: macOS Big Sur
Impact: A malicious application may bypass Gatekeeper checks
Description: This issue was addressed with improved checks to prevent unauthorized actions.
CVE-2022-22663: Arsenii Kostromin (0x3c3e)

CVMS
Available for: macOS Big Sur
Impact: A malicious application may be able to gain root privileges
Description: A memory initialization issue was addressed.
CVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori
CVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori

DriverKit
Available for: macOS Big Sur
Impact: A malicious application may be able to execute arbitrary code with system privileges
Description: An out-of-bounds access issue was addressed with improved bounds checking.
CVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)

Graphics Drivers
Available for: macOS Big Sur
Impact: A local user may be able to read kernel memory
Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation.
CVE-2022-22674: an anonymous researcher

Intel Graphics Driver
Available for: macOS Big Sur
Impact: A malicious application may be able to execute arbitrary code with kernel privileges
Description: An out-of-bounds write issue was addressed with improved bounds checking.
CVE-2022-26720: Liu Long of Ant Security Light-Year Lab

Intel Graphics Driver
Available for: macOS Big Sur
Impact: A malicious application may be able to execute arbitrary code with kernel privileges
Description: An out-of-bounds read issue was addressed with improved input validation.
CVE-2022-26770: Liu Long of Ant Security Light-Year Lab

Intel Graphics Driver
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-26756: Jack Dates of RET2 Systems, Inc

Intel Graphics Driver
Available for: macOS Big Sur
Impact: A malicious application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved input validation.
CVE-2022-26769: Antonio Zekic (@antoniozekic)

Intel Graphics Driver
Available for: macOS Big Sur
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro Zero Day Initiative

IOMobileFrameBuffer
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved state management.
CVE-2022-26768: an anonymous researcher

Kernel
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved validation.
CVE-2022-26714: Peter Nguyễn Vũ Hoàng (@peternguyen14) of STAR Labs (@starlabs_sg)

Kernel
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A use after free issue was addressed with improved memory management.
CVE-2022-26757: Ned Williamson of Google Project Zero

LaunchServices
Available for: macOS Big Sur
Impact: A malicious application may be able to bypass Privacy preferences
Description: The issue was addressed with additional permissions checks.
CVE-2022-26767: Wojciech Reguła (@_r3ggi) of SecuRing

LaunchServices
Available for: macOS Big Sur
Impact: A sandboxed process may be able to circumvent sandbox restrictions
Description: An access issue was addressed with additional sandbox restrictions on third-party applications.
CVE-2022-26706: Arsenii Kostromin (0x3c3e)

libresolv
Available for: macOS Big Sur
Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution
Description: This issue was addressed with improved checks.
CVE-2022-26776: Zubair Ashraf of Crowdstrike, Max Shavrick (@_mxms) of the Google Security Team

LibreSSL
Available for: macOS Big Sur
Impact: Processing a maliciously crafted certificate may lead to a denial of service
Description: A denial of service issue was addressed with improved input validation.
CVE-2022-0778

libxml2
Available for: macOS Big Sur
Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution
Description: A use after free issue was addressed with improved memory management.
CVE-2022-23308

OpenSSL
Available for: macOS Big Sur
Impact: Processing a maliciously crafted certificate may lead to a denial of service
Description: This issue was addressed with improved checks.
CVE-2022-0778

PackageKit
Available for: macOS Big Sur
Impact: A malicious application may be able to modify protected parts of the file system
Description: This issue was addressed by removing the vulnerable code.
CVE-2022-26712: Mickey Jin (@patch1t)

Printing
Available for: macOS Big Sur
Impact: A malicious application may be able to bypass Privacy preferences
Description: This issue was addressed by removing the vulnerable code.
CVE-2022-26746: @gorelics

Security
Available for: macOS Big Sur
Impact: A malicious app may be able to bypass signature validation
Description: A certificate parsing issue was addressed with improved checks.
CVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)

SMB
Available for: macOS Big Sur
Impact: An application may be able to gain elevated privileges
Description: An out-of-bounds read issue was addressed with improved input validation.
CVE-2022-26718: Peter Nguyễn Vũ Hoàng of STAR Labs

SMB
Available for: macOS Big Sur
Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved input validation.
CVE-2022-26723: Felix Poulin-Belanger

SMB
Available for: macOS Big Sur
Impact: An application may be able to gain elevated privileges
Description: An out-of-bounds write issue was addressed with improved bounds checking.
CVE-2022-26715: Peter Nguyễn Vũ Hoàng of STAR Labs

SoftwareUpdate
Available for: macOS Big Sur
Impact: A malicious application may be able to access restricted files
Description: This issue was addressed with improved entitlements.
CVE-2022-26728: Mickey Jin (@patch1t)

TCC
Available for: macOS Big Sur
Impact: An app may be able to capture a user's screen
Description: This issue was addressed with improved checks.
CVE-2022-26726: an anonymous researcher

Tcl
Available for: macOS Big Sur
Impact: A malicious application may be able to break out of its sandbox
Description: This issue was addressed with improved environment sanitization.
CVE-2022-26755: Arsenii Kostromin (0x3c3e)

Vim
Available for: macOS Big Sur
Impact: Multiple issues in Vim
Description: Multiple issues were addressed by updating Vim.
CVE-2021-4136
CVE-2021-4166
CVE-2021-4173
CVE-2021-4187
CVE-2021-4192
CVE-2021-4193
CVE-2021-46059
CVE-2022-0128

WebKit
Available for: macOS Big Sur
Impact: Processing a maliciously crafted mail message may lead to running arbitrary javascript
Description: A validation issue was addressed with improved input sanitization.
CVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu of Palo Alto Networks (paloaltonetworks.com)

Wi-Fi
Available for: macOS Big Sur
Impact: A malicious application may disclose restricted memory
Description: A memory corruption issue was addressed with improved validation.
CVE-2022-26745: an anonymous researcher

Wi-Fi
Available for: macOS Big Sur
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2022-26761: Wang Yu of Cyberserval

zip
Available for: macOS Big Sur
Impact: Processing a maliciously crafted file may lead to a denial of service
Description: A denial of service issue was addressed with improved state handling.
CVE-2022-0530

zlib
Available for: macOS Big Sur
Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution
Description: A memory corruption issue was addressed with improved input validation.
CVE-2018-25032: Tavis Ormandy

zsh
Available for: macOS Big Sur
Impact: A remote attacker may be able to cause arbitrary code execution
Description: This issue was addressed by updating to zsh version 5.8.1.
CVE-2021-45444

Additional recognition

Bluetooth
We would like to acknowledge Jann Horn of Project Zero for their assistance.

macOS Big Sur 11.6.6 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/
All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TUACgkQeC9qKD1p
rhgJBg/9HpPp6P2OtFdYHigfaoga/3szMAjXC650MlC2rF1lXyTRVsO54eupz4er
K8Iud3+YnDVTUKkadftWt2XdxAADGtfEFhJW584RtnWjeli+XtGEjQ8jD1/MNPJW
qtnrOh2pYG9SxolKDofhiecbYxIGppRKSDRFl0/3VGFed2FIpiRDunlttHBEhHu/
vZVSFzMrNbGvhju+ZCdwFLKXOgB851aRSeo9Xkt63tSGiee7rLmVAINyFbbPwcVP
yXwMvn0TNodCBn0wBWD0+iQ3UXIDIYSPaM1Z0BQxVraEhK3Owro3JKgqNbWswMvj
SY0KUulbAPs3aOeyz1BI70npYA3+Qwd+bk2hxbzbU/AxvxCrsEk04QfxLYqvj0mR
VZYPcup2KAAkiTeekQ5X739r8NAyaaI+bp7FllFv/Z2jVW9kGgNIFr46R05MD9NF
aC1JAZtJ4VWbMEGHnHAMrOgdGaHpryvzl2BjUXRgW27vIq5uF5YiNcpjS2BezTFc
R2ojiMNRB33Y44LlH7Zv3gHm4bE3+NzcGeWvBzwOsHznk9Jiv6x2eBUxkttMlPyO
zymQMONQN3bktSMT8JnmJ8rlEgISONd7NeTEzuhlGIWaWNAFmmBoPnBiPk+yC3n4
d22yFs6DLp2pJ+0zOWmTcqt1xYng05Jwj4F0KT49w0TO9Up79+o=
=rtPl
-----END PGP SIGNATURE-----
Bugs fixed (https://bugzilla.redhat.com/):

2066837 - CVE-2022-24769 moby: Default inheritable capabilities for linux container should be empty

5. The updated image includes bug and security fixes.

Solution:

If you are using RHACS 3.68.1, you are advised to upgrade to patch release 3.68.2.

Bugs fixed (https://bugzilla.redhat.com/):

2090957 - CVE-2022-1902 stackrox: Improper sanitization allows users to retrieve Notifier secrets from GraphQL API in plaintext

5. JIRA issues fixed (https://issues.jboss.org/):

ROX-11391 - Release RHACS 3.68.2
ROX-9657 - Patch supported RHACS images previous to 3.69.0 release to fix RHSA-2022:0658

6. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: openssl security update
Advisory ID:       RHSA-2022:1078-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1078
Issue date:        2022-03-28
CVE Names:         CVE-2022-0778
====================================================================

1. Summary:

An update for openssl is now available for Red Hat Enterprise Linux 7.6 Advanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update Support, and Red Hat Enterprise Linux 7.6 Update Services for SAP Solutions.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux Server AUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64
Red Hat Enterprise Linux Server TUS (v. 7.6) - x86_64

3. Description:

OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.

Security Fix(es):

* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.

5. Package List:

Red Hat Enterprise Linux Server AUS (v. 7.6):

Source:
openssl-1.0.2k-18.el7_6.src.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server E4S (v. 7.6):

Source:
openssl-1.0.2k-18.el7_6.src.rpm

ppc64le:
openssl-1.0.2k-18.el7_6.ppc64le.rpm
openssl-debuginfo-1.0.2k-18.el7_6.ppc64le.rpm
openssl-devel-1.0.2k-18.el7_6.ppc64le.rpm
openssl-libs-1.0.2k-18.el7_6.ppc64le.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server TUS (v. 7.6):

Source:
openssl-1.0.2k-18.el7_6.src.rpm

x86_64:
openssl-1.0.2k-18.el7_6.x86_64.rpm
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-devel-1.0.2k-18.el7_6.i686.rpm
openssl-devel-1.0.2k-18.el7_6.x86_64.rpm
openssl-libs-1.0.2k-18.el7_6.i686.rpm
openssl-libs-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional AUS (v. 7.6):

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional E4S (v. 7.6):

ppc64le:
openssl-debuginfo-1.0.2k-18.el7_6.ppc64le.rpm
openssl-perl-1.0.2k-18.el7_6.ppc64le.rpm
openssl-static-1.0.2k-18.el7_6.ppc64le.rpm

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

Red Hat Enterprise Linux Server Optional TUS (v. 7.6):

x86_64:
openssl-debuginfo-1.0.2k-18.el7_6.i686.rpm
openssl-debuginfo-1.0.2k-18.el7_6.x86_64.rpm
openssl-perl-1.0.2k-18.el7_6.x86_64.rpm
openssl-static-1.0.2k-18.el7_6.i686.rpm
openssl-static-1.0.2k-18.el7_6.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/updates/classification/#important

8. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
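The fixed upstream releases for CVE-2022-0778 are 1.0.2zd, 1.1.1n and 3.0.2. Note that vendor builds such as the openssl-1.0.2k-18.el7_6 packages in this advisory backport the fix without bumping the upstream version string, so a version-string check only applies to upstream builds. As an illustrative sketch (the helper name is hypothetical, not part of any advisory), an upstream version string can be classified like this:

```python
def cve_2022_0778_fixed(version: str) -> bool:
    """Classify an *upstream* OpenSSL version string against the
    CVE-2022-0778 fix releases (1.0.2zd, 1.1.1n, 3.0.2).
    Does NOT apply to vendor backports such as 1.0.2k-18.el7_6."""
    if version.startswith("3."):
        # Only 3.0.0 and 3.0.1 were affected in the 3.x line.
        return version not in ("3.0.0", "3.0.1")
    if version.startswith("1.1.1"):
        # 1.1.1 through 1.1.1m are affected; 1.1.1n and later are fixed.
        return version[len("1.1.1"):] >= "n"
    if version.startswith("1.0.2z"):
        # 1.0.2za through 1.0.2zc are affected; 1.0.2zd and later are fixed.
        return version[len("1.0.2z"):] >= "d"
    # Other 1.0.2 and 1.1.1 builds, and unknown branches: treat as affected.
    return False
```

For example, `cve_2022_0778_fixed("1.1.1m")` is false while `cve_2022_0778_fixed("1.1.1n")` is true, mirroring the "Fixed in / Affected" ranges quoted at the top of this entry.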
Summary:

Red Hat OpenShift Virtualization release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.

Description:

OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform.

This advisory contains the following OpenShift Virtualization 4.11.0 images:

RHEL-8-CNV-4.11
===============
hostpath-provisioner-container-v4.11.0-21
kubevirt-tekton-tasks-operator-container-v4.11.0-29
kubevirt-template-validator-container-v4.11.0-17
bridge-marker-container-v4.11.0-26
hostpath-csi-driver-container-v4.11.0-21
cluster-network-addons-operator-container-v4.11.0-26
ovs-cni-marker-container-v4.11.0-26
virtio-win-container-v4.11.0-16
ovs-cni-plugin-container-v4.11.0-26
kubemacpool-container-v4.11.0-26
hostpath-provisioner-operator-container-v4.11.0-24
cnv-containernetworking-plugins-container-v4.11.0-26
kubevirt-ssp-operator-container-v4.11.0-54
virt-cdi-uploadserver-container-v4.11.0-59
virt-cdi-cloner-container-v4.11.0-59
virt-cdi-operator-container-v4.11.0-59
virt-cdi-importer-container-v4.11.0-59
virt-cdi-uploadproxy-container-v4.11.0-59
virt-cdi-controller-container-v4.11.0-59
virt-cdi-apiserver-container-v4.11.0-59
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7
kubevirt-tekton-tasks-copy-template-container-v4.11.0-7
checkup-framework-container-v4.11.0-67
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7
vm-network-latency-checkup-container-v4.11.0-67
kubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7
hyperconverged-cluster-webhook-container-v4.11.0-95
cnv-must-gather-container-v4.11.0-62
hyperconverged-cluster-operator-container-v4.11.0-95
kubevirt-console-plugin-container-v4.11.0-83
virt-controller-container-v4.11.0-105
virt-handler-container-v4.11.0-105
virt-operator-container-v4.11.0-105
virt-launcher-container-v4.11.0-105
virt-artifacts-server-container-v4.11.0-105
virt-api-container-v4.11.0-105
libguestfs-tools-container-v4.11.0-105
hco-bundle-registry-container-v4.11.0-587

Security Fix(es):

* golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
* kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
* golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
* golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
* golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
* golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
* golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
* golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):

1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implementation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering to a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk is not able to edit
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - 
[CNV-4.11] HCO Being Unable to Reconcile State\n2083093 - VM overview tab is crashed\n2083097 - \"Mount Windows drivers disk\" should not show when the template is not \"windows\"\n2083100 - Something keeps loading in the \"node selector\" modal\n2083101 - \"Restore default settings\" never become available while editing CPU/Memory\n2083135 - VM fails to schedule with vTPM in spec\n2083256 - SSP Reconcile logging improvement when CR resources are changed\n2083595 - [RFE] Disable VM descheduler if the VM is not live migratable\n2084102 - [e2e] Many elements are lacking proper selector like \u0027data-test-id\u0027 or \u0027data-test\u0027\n2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails\n2084418 - \"Invalid SSH public key format\" appears when drag ssh key file to \"Authorized SSH Key\" field\n2084431 - User credentials for ssh is not in correct format\n2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab. \n2084532 - Console is crashed while detaching disk\n2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)\n2085320 - Tolerations rules is not adding correctly\n2085322 - Not able to stop/restart VM if the VM is staying in \"Starting\"\n2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode\n2086278 - Cloud init script edit add \" hostname=\u0027\u0027 \" when is should not be added\n2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode\n2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode\n2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode\n2086294 - [dark mode] Can\u0027t see the number inside the donut chart in VMs per template card\n2086303 - non-priv user can\u0027t create VM when namespace is not selected\n2086479 - some modals use \"Save\" 
and some modals use \"Submit\"\n2086486 - cluster overview getting started card include old information\n2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend\n2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard\n2086803 - When clonnig a template we need to update vm labels and annotaions to match new template\n2086825 - VM restore PVC uses exact source PVC request size\n2086849 - Create from YAML example is not runnable\n2087188 - When VM is stopped - adding disk failed to show\n2087189 - When VM is stopped - adding disk failed to show\n2087232 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2087546 - \"Quick Starts\" is missing in Getting started card\n2087547 - Activity and Status card are missing in Virtualization Overview\n2087559 - template in \"VMs per template\" should take user to vm list page\n2087566 - Remove the \"auto upload\" label from template in the catalog if the auto-upload boot source not exists\n2087570 - Page title should be \"VirtualMachines\" 
and not \"Virtual Machines\"\n2087577 - \"VMs per template\" load time is a bit long\n2087578 - Terminology \"VM\" should be \"Virtual Machine\" in all places\n2087582 - Remove VMI and MTV from the navigation\n2087583 - [RFE] Show more info about boot source in template list\n2087584 - Template provider should not be mandatory\n2087587 - Improve the descriptive text in the kebab menu of template\n2087589 - Red icons shows in storage disk source selection without a good reason\n2087590 - [REF] \"Upload a new file to a PVC\" should not open the form in a new tab\n2087593 - \"Boot method\" is not a good name in overview tab\n2087603 - Align details card for single VM overview with the design doc\n2087616 - align the utilization card of single VM overview with the design\n2087701 - [RFE] Missing a link to VMI from running VM details page\n2087717 - Message when editing template boot source is wrong\n2088034 - Virtualization Overview crashes when a VirtualMachine has no labels\n2088355 - disk modal shows all storage classes as default\n2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2088379 - Create VM from catalog does not respect the storageclass of the template\u0027s boot source\n2088407 - Missing create button in the template list\n2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context\n2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11\n2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error\n2088849 - \"dataimportcrontemplate.kubevirt.io/enable\" field does not do any validation\n2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco\n2089271 - Virtualization appears twice in sidebar\n2089327 - add network modal crash when no networks available\n2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page\n2089477 - [RFE] Allow upload source when adding VM disk\n2089700 - 
Drive column in Disks card of Overview page has duplicated values\n2089745 - When removing all disks from customize wizard app crashes\n2089789 - Add windows drivers disk is missing when template is not windows\n2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user\n2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages\n2089840 - Cant create snapshot if VM is without disks\n2089877 - Utilization card on single VM overview - timespan menu lacks 5min option\n2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update\n2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics\n2089954 - Details card on single VM overview - VNC console has grey padding\n2089963 - Details card on single VM overview - Operating system info is not available\n2089967 - Network Interfaces card on single VM overview - name tooltip lacks info\n2089970 - Network Interfaces card on single VM overview - IP tooltip\n2089972 - Disks card on single VM overview -typo\n2089979 - Single VM Details - CPU|Memory edit icon misplaced\n2089982 - Single VM Details - SSH modal has redundant VM name\n2090035 - Alert card is missing in single VM overview\n2090036 - OS should be \"Operating system\" and host should be \"hostname\" in single vm overview\n2090037 - Add template link in single vm overview details card\n2090038 - The update field under the version in overview should be consistent with the operator page\n2090042 - Move the edit button close to the text for \"boot order\" and \"ssh access\"\n2090043 - \"No resource selected\" in vm boot order\n2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page\n2090048 - \"Boot mode\" should be editable while VM is running\n2090054 - Services \"kubernetes\" and \"openshift\" should not be listing in vm details\n2090055 - Add 
link to vm template in vm details page\n2090056 - \"Something went wrong\" shows on VM \"Environment\" tab\n2090057 - \"?\" icon is too big in environment and disk tab\n2090059 - Failed to add configmap in environment tab due to validate error\n2090064 - Miss \"remote desktop\" in console dropdown list for windows VM\n2090066 - [RFE] Improve guest login credentials\n2090068 - Make the \"name\" and \"Source\" column wider in vm disk tab\n2090131 - Key\u0027s value in \"add affinity rule\" modal is too small\n2090350 - memory leak in virt-launcher process\n2091003 - SSH service is not deleted along the VM\n2091058 - After VM gets deleted, the user is redirected to a page with a different namespace\n2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec. \n2091406 - wrong template namespace label when creating a vm with wizard\n2091754 - Scheduling and scripts tab should be editable while the VM is running\n2091755 - Change bottom \"Save\" to \"Apply\" on cloud-init script form\n2091756 - The root disk of cloned template should be editable\n2091758 - \"OS\" should be \"Operating system\" in template filter\n2091760 - The provider should be empty if it\u0027s not set during cloning\n2091761 - Miss \"Edit labels\" and \"Edit annotations\" in template kebab button\n2091762 - Move notification above the tabs in template details page\n2091764 - Clone a template should lead to the template details\n2091765 - \"Edit bootsource\" is keeping in load in template actions dropdown\n2091766 - \"Are you sure you want to leave this page?\" pops up when click the \"Templates\" link\n2091853 - On Snapshot tab of single VM \"Restore\" button should move to the kebab actions together with the Delete\n2091863 - BootSource edit modal should list affected templates\n2091868 - Catalog list view has two columns named \"BootSource\"\n2091889 - Devices should be editable for customize template\n2091897 - username is missing in the generated ssh 
command\n2091904 - VM is not started if adding \"Authorized SSH Key\" during vm creation\n2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root\n2091940 - SSH is not enabled in vm details after restart the VM\n2091945 - delete a template should lead to templates list\n2091946 - Add disk modal shows wrong units\n2091982 - Got a lot of \"Reconciler error\" in cdi-deployment log after adding custom DataImportCron to hco\n2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank\n2092052 - Virtualization should be omitted in Calatog breadcrumbs\n2092071 - Getting started card in Virtualization overview can not be hidden. \n2092079 - Error message stays even when problematic field is dismissed\n2092158 - PrometheusRule  kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO\n2092228 - Ensure Machine Type for new VMs is 8.6\n2092230 - [RFE] Add indication/mark to deprecated template\n2092306 - VM is stucking with WaitingForVolumeBinding if creating via \"Boot from CD\"\n2092337 - os is empty in VM details page\n2092359 - [e2e] data-test-id includes all pvc name\n2092654 - [RFE] No obvious way to delete the ssh key from the VM\n2092662 - No url example for rhel and windows template\n2092663 - no hyperlink for URL example in disk source \"url\"\n2092664 - no hyperlink to the cdi uploadproxy URL\n2092781 - Details card should be removed for non admins. \n2092783 - Top consumers\u0027 card should be removed for non admins. \n2092787 - Operators links should be removed from Getting started card\n2092789 - \"Learn more about Operators\" link should lead to the Red Hat documentation\n2092951 - \"Edit BootSource\" 
action should have more explicit information when disabled\n2093282 - Remove links to \u0027all-namespaces/\u0027 for non-privileged user\n2093691 - Creation flow drawer left padding is broken\n2093713 - Required fields in creation flow should be highlighted if empty\n2093715 - Optional parameters section in creation flow is missing bottom padding\n2093716 - CPU|Memory modal button should say \"Restore template settings\"\n2093772 - Add a service in environment it reminds a pending change in boot order\n2093773 - Console crashed if adding a service without serial number\n2093866 - Cannot create vm from the template `vm-template-example`\n2093867 - OS for template \u0027vm-template-example\u0027 should matching the version of the image\n2094202 - Cloud-init username field should have hint\n2094207 - Cloud-init password field should have auto-generate option\n2094208 - SSH key input is missing validation\n2094217 - YAML view should reflect shanges in SSH form\n2094222 - \"?\" icon should be placed after red asterisk in required fields\n2094323 - Workload profile should be editable in template details page\n2094405 - adding resource on enviornment isnt showing on disks list when vm is running\n2094440 - Utilization pie charts figures are not based on current data\n2094451 - PVC selection in VM creation flow does not work for non-priv user\n2094453 - CD Source selection in VM creation flow is missing Upload option\n2094465 - Typo in Source tooltip\n2094471 - Node selector modal for non-privileged user\n2094481 - Tolerations modal for non-privileged user\n2094486 - Add affinity rule modal\n2094491 - Affinity rules modal button\n2094495 - Descheduler modal has same text in two lines\n2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id\n2094665 - Dedicated Resources modal for non-privileged user\n2094678 - Secrets and ConfigMaps can\u0027t be added to Windows VM\n2094727 - Creation flow should have VM info in header row\n2094807 - hardware devices 
dropdown has group title even with no devices in cluster\n2094813 - Cloudinit password is seen in wizard\n2094848 - Details card on Overview page - \u0027View details\u0027 link is missing\n2095125 - OS is empty in the clone modal\n2095129 - \"undefined\" appears in rootdisk line in clone modal\n2095224 - affinity modal for non-privileged users\n2095529 - VM migration cancelation in kebab action should have shorter name\n2095530 - Column sizes in VM list view\n2095532 - Node column in VM list view is visible to non-privileged user\n2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime\n2095570 - Details tab of VM should not have Node info for non-privileged user\n2095573 - Disks created as environment or scripts should have proper label\n2095953 - VNC console controls layout\n2095955 - VNC console tabs\n2096166 - Template \"vm-template-example\" is binding with namespace \"default\"\n2096206 - Inconsistent capitalization in Template Actions\n2096208 - Templates in the catalog list is not sorted\n2096263 - Incorrectly displaying units for Disks size or Memory field in various places\n2096333 - virtualization overview, related operators title is not aligned\n2096492 - Cannot create vm from a cloned template if its boot source is edited\n2096502 - \"Restore template settings\" should be removed from template CPU editor\n2096510 - VM can be created without any disk\n2096511 - Template shows \"no Boot Source\" and label \"Source available\" at the same time\n2096620 - in templates list, edit boot reference kebab action opens a modal with different title\n2096781 - Remove boot source provider while edit boot source reference\n2096801 - vnc thumbnail in virtual machine overview should be active on page load\n2096845 - Windows template\u0027s scripts tab is crashed\n2097328 - virtctl guestfs shouldn\u0027t required uid = 0\n2097370 - missing titles for optional parameters in wizard customization page\n2097465 - 
Count is not updating for \u0027prometheusrule\u0027 component when metrics kubevirt_hco_out_of_band_modifications_count executed\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2098134 - \"Workload profile\" column is not showing completely in template list\n2098135 - Workload is not showing correct in catalog after change the template\u0027s workload\n2098282 - Javascript error when changing boot source of custom template to be an uploaded file\n2099443 - No \"Quick create virtualmachine\" button for template \u0027vm-template-example\u0027\n2099533 - ConsoleQuickStart for HCO CR\u0027s VM is missing\n2099535 - The cdi-uploadproxy certificate url should be opened in a new tab\n2099539 - No storage option for upload while editing a disk\n2099566 - Cloudinit should be replaced by cloud-init in all places\n2099608 - \"DynamicB\" shows in vm-example disk size\n2099633 - Doc links needs to be updated\n2099639 - Remove user line from the ssh command section\n2099802 - Details card link shouldn\u0027t be hard-coded\n2100054 - Windows VM with WSL2 guest fails to migrate\n2100284 - Virtualization overview is crashed\n2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101485 - Cloudinit should be replaced by cloud-init in all places\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer\n2102076 - Using 
CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2102122 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2102124 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102127 - Cannot add NIC to VM template as non-priv user\n2102129 - All templates are labeling \"source available\" in template list page\n2102131 - The number of hardware devices is not correct in vm overview tab\n2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2102143 - vm clone modal is displaying DV size instead of PVC size\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102543 - Add button moved to right\n2102544 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102545 - VM filter has two \"Other\" checkboxes which are triggered together\n2104617 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106258 - All pages are crashed after visit Virtualization -\u003e Overview\n2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions\n2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111562 - kubevirt plugin console crashed after visit vmi page\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled\n2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled\n2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "db": "PACKETSTORM",
        "id": "167188"
      },
      {
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "PACKETSTORM",
        "id": "167225"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-0778",
        "trust": 3.5
      },
      {
        "db": "PACKETSTORM",
        "id": "167344",
        "trust": 1.0
      },
      {
        "db": "TENABLE",
        "id": "TNS-2022-09",
        "trust": 1.0
      },
      {
        "db": "TENABLE",
        "id": "TNS-2022-06",
        "trust": 1.0
      },
      {
        "db": "TENABLE",
        "id": "TNS-2022-08",
        "trust": 1.0
      },
      {
        "db": "TENABLE",
        "id": "TNS-2022-07",
        "trust": 1.0
      },
      {
        "db": "SIEMENS",
        "id": "SSA-712929",
        "trust": 1.0
      },
      {
        "db": "JVN",
        "id": "JVNVU99682885",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96890975",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90813125",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98905589",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99030761",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU91676340",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU91198149",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU92169998",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-25-259-06",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-046-02",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-143-02",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-25-226-21",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-272-02",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-059-01",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476",
        "trust": 0.8
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0778",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166818",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167188",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167371",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167555",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166504",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166502",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168392",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167225",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "db": "PACKETSTORM",
        "id": "167188"
      },
      {
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "PACKETSTORM",
        "id": "167225"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "id": "VAR-202203-0005",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.2376099833333333
  },
  "last_update_date": "2025-12-22T22:11:54.518000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "hitachi-sec-2022-132 Software product security information",
        "trust": 0.8,
        "url": "https://www.openssl.org/news/secadv/20220315.txt"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2022-1575",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1575"
      },
      {
        "title": "Debian Security Advisories: DSA-5103-1 openssl -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=4ecbdda56426ff105b6a2939daf5c4e7"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221077 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221078 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221082 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221073 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221091 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221076 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221071 - Security Advisory"
      },
      {
        "title": "Red Hat: Low: compat-openssl10 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225326 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat JBoss Web Server 5.6.2 Security Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221520 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221112 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: compat-openssl11 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224899 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221065 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat JBoss Web Server 5.6.2 Security Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221519 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221066 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2022-1766",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1766"
      },
      {
        "title": "Amazon Linux 2: ALAS2NITRO-ENCLAVES-2022-018",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2NITRO-ENCLAVES-2022-018"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-0778"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.10 security and extras update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221357 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.9.29 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221363 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.8.37 security and extras update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221370 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.10 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221356 - Security Advisory"
      },
      {
        "title": "Tenable Security Advisories: [R1] Nessus Agent Versions 8.3.3 and 10.1.3 Fix One Third-Party Vulnerability",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2022-07"
      },
      {
        "title": "Tenable Security Advisories: [R1] Nessus Versions 8.15.4 and 10.1.2 Fix One Third-Party Vulnerability",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2022-06"
      },
      {
        "title": "Tenable Security Advisories: [R1] Stand-alone Security Patch Available for Tenable.sc versions 5.19.0 to 5.20.1: Patch 202204.1",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2022-08"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-041",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-041"
      },
      {
        "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP11 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221390 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Virtualization 4.10.1 Images security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224668 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP11 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221389 - Security Advisory"
      },
      {
        "title": "Hitachi Security Advisories: Vulnerability in Hitachi Configuration Manager and Hitachi Ops Center API Configuration Manager",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2022-121"
      },
      {
        "title": "Hitachi Security Advisories: Vulnerability in JP1",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2022-132"
      },
      {
        "title": "Hitachi Security Advisories: Vulnerability in Cosminexus HTTP Server",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2022-118"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.1.2.1 containers security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221739 - Security Advisory"
      },
      {
        "title": "Brocade Security Advisories: Access Denied",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=af28f1c934f899990fae4f8d3f165957"
      },
      {
        "title": "Palo Alto Networks Security Advisory: CVE-2022-0778 Impact of the OpenSSL Infinite Loop Vulnerability CVE-2022-0778",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=aae1a589daaf238d6814b018feedaec7"
      },
      {
        "title": "Red Hat: Important: RHV-H security update (redhat-virtualization-host) 4.3.22",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221263 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224690 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: RHACS 3.68 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225132 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Logging Security and Bug update Release 5.4.1",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222216 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Openshift Logging Security and Bug update Release (5.2.10)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222218 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Logging Security and Bug update Release 5.3.7",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222217 - Security Advisory"
      },
      {
        "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer, Hitachi Ops Center Analyzer viewpoint and Hitachi Ops Center Viewpoint",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2022-126"
      },
      {
        "title": "Tenable Security Advisories: [R1] Tenable.sc 5.21.0 Fixes Multiple Third-Party Vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2022-09"
      },
      {
        "title": "Palo Alto Networks Security Advisory: CVE-2022-22963 Informational: Impact of Spring Vulnerabilities CVE-2022-22963 and CVE-2010-1622 Bypass",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=bb2470489013d7c39502e755acaa670b"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6.57 security and extras update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221622 - Security Advisory"
      },
      {
        "title": "Red Hat: Low: Release of OpenShift Serverless Version 1.22.0",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221747 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.1 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221734 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.3 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225840 - Security Advisory"
      },
      {
        "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Command Suite, Hitachi Automation Director, Hitachi Configuration Manager, Hitachi Infrastructure Analytics Advisor and Hitachi Ops Center",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2023-126"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221476 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.0 extras and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225070 - Security Advisory"
      },
      {
        "title": "Apple: macOS Monterey 12.4",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=73857ee26a600b1527481f1deacc0619"
      },
      {
        "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224956 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.5.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221396 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALASMARIADB10.5-2023-003",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALASMARIADB10.5-2023-003"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-182",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-182"
      },
      {
        "title": "CVE-2022-0778",
        "trust": 0.1,
        "url": "https://github.com/jeongjunsoo/CVE-2022-0778 "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-835",
        "trust": 1.0
      },
      {
        "problemtype": "infinite loop (CWE-835) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/gdb3gqvjpxje7x5c5jn6jaa4xudwd6e6/"
      },
      {
        "trust": 1.0,
        "url": "https://support.apple.com/kb/ht213257"
      },
      {
        "trust": 1.0,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2022-0002"
      },
      {
        "trust": 1.0,
        "url": "https://security.gentoo.org/glsa/202210-02"
      },
      {
        "trust": 1.0,
        "url": "http://seclists.org/fulldisclosure/2022/may/35"
      },
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/167344/openssl-1.0.2-1.1.1-3.0-bn_mod_sqrt-infinite-loop.html"
      },
      {
        "trust": 1.0,
        "url": "https://www.tenable.com/security/tns-2022-09"
      },
      {
        "trust": 1.0,
        "url": "https://support.apple.com/kb/ht213256"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=380085481c64de749a6dd25cdf0bcf4360b30f83"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=a466912611aa6cbdf550cd10601390e587451246"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/323snn6zx7prjjwp2buaflpuae42xwlz/"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 1.0,
        "url": "https://www.tenable.com/security/tns-2022-06"
      },
      {
        "trust": 1.0,
        "url": "http://seclists.org/fulldisclosure/2022/may/33"
      },
      {
        "trust": 1.0,
        "url": "https://www.tenable.com/security/tns-2022-08"
      },
      {
        "trust": 1.0,
        "url": "https://www.tenable.com/security/tns-2022-07"
      },
      {
        "trust": 1.0,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.0,
        "url": "https://support.apple.com/kb/ht213255"
      },
      {
        "trust": 1.0,
        "url": "http://seclists.org/fulldisclosure/2022/may/38"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20220321-0002/"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20220429-0005/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.debian.org/debian-lts-announce/2022/03/msg00023.html"
      },
      {
        "trust": 1.0,
        "url": "https://lists.debian.org/debian-lts-announce/2022/03/msg00024.html"
      },
      {
        "trust": 1.0,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-712929.pdf"
      },
      {
        "trust": 1.0,
        "url": "https://www.openssl.org/news/secadv/20220315.txt"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=3118eb64934499d93db3230748a452351d1d9a65"
      },
      {
        "trust": 1.0,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.0,
        "url": "https://www.debian.org/security/2022/dsa-5103"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/w6k3pr542dxwleffmfidmme4cwmhjrmg/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90813125/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99682885/index.html"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu98905589/index.html"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu96890975/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu91676340/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu91198149/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu92169998/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99030761/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-272-02"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-059-01"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-143-02"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-046-02"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-25-226-21"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-25-259-06"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24761"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1356"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhba-2022:1355"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23308"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22663"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0128"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4187"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44790"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22674"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213256"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0530"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44224"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26698"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26697"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4173"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45444"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22675"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22720"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26706"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22665"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4166"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/en-us/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhba-2022:1369"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1370"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24769"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5132"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1902"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1902"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1082"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1078"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24921"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1798"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1621"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24904"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24905"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24904"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:4690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29165"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41617"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29165"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24905"
      }
    ],
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "db": "PACKETSTORM",
        "id": "167188"
      },
      {
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "PACKETSTORM",
        "id": "167225"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "db": "PACKETSTORM",
        "id": "167188"
      },
      {
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "PACKETSTORM",
        "id": "167225"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "date": "2022-04-22T14:13:53",
        "db": "PACKETSTORM",
        "id": "166818"
      },
      {
        "date": "2022-05-17T16:59:42",
        "db": "PACKETSTORM",
        "id": "167188"
      },
      {
        "date": "2022-06-03T15:30:23",
        "db": "PACKETSTORM",
        "id": "167371"
      },
      {
        "date": "2022-06-21T15:22:18",
        "db": "PACKETSTORM",
        "id": "167555"
      },
      {
        "date": "2022-03-28T15:55:39",
        "db": "PACKETSTORM",
        "id": "166504"
      },
      {
        "date": "2022-03-28T15:55:23",
        "db": "PACKETSTORM",
        "id": "166502"
      },
      {
        "date": "2022-09-15T14:20:18",
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "date": "2022-05-19T15:53:12",
        "db": "PACKETSTORM",
        "id": "167225"
      },
      {
        "date": "2022-03-17T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "date": "2022-03-15T17:15:08.513000",
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0778"
      },
      {
        "date": "2025-09-22T01:16:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      },
      {
        "date": "2024-11-21T06:39:22.540000",
        "db": "NVD",
        "id": "CVE-2022-0778"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "OpenSSL BN_mod_sqrt() issue causing an infinite loop when the modulus is non-prime",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001476"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code execution",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "167188"
      }
    ],
    "trust": 0.1
  }
}
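
The record above concerns CVE-2022-0778: BN_mod_sqrt() implements a Tonelli-Shanks modular square root whose termination argument only holds for prime moduli. A hedged Python sketch (illustrative only, not OpenSSL's BIGNUM code) shows the loop structure and the kind of iteration guard the fix introduces:

```python
# Illustrative Tonelli-Shanks modular square root with explicit iteration
# guards. The termination proof for the order-finding loops assumes a prime
# modulus p; for a crafted non-prime p the unguarded loops can run forever,
# which is the CVE-2022-0778 failure mode. This is NOT OpenSSL's code.
def mod_sqrt(a, p, max_iter=1000):
    a %= p
    if a == 0:
        return 0
    # Write p - 1 = q * 2^s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    if s == 1:  # p ≡ 3 (mod 4): direct formula, no loop needed
        r = pow(a, (p + 1) // 4, p)
        return r if (r * r) % p == a else None
    # Find a quadratic non-residue z (guarded: may not exist mod composites).
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
        if z > max_iter:
            return None
    m, c = s, pow(z, q, p)
    t, r = pow(a, q, p), pow(a, (q + 1) // 2, p)
    rounds = 0
    while t != 1:
        rounds += 1
        if rounds > max_iter:  # guard against non-termination
            return None
        # Least i with t^(2^i) == 1; for non-prime p it may not exist.
        i, t2 = 0, t
        while t2 != 1:
            t2 = (t2 * t2) % p
            i += 1
            if i >= m:
                return None
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, (b * b) % p
        t, r = (t * c) % p, (r * b) % p
    return r if (r * r) % p == a else None
```

With a prime modulus the guards never fire (e.g. 7² ≡ 10 mod 13); with a composite modulus such as 21 the non-residue search cannot succeed and the guard returns None instead of spinning.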

VAR-202101-0119

Vulnerability from variot - Updated: 2025-12-22 21:27

The iconv feature in the GNU C Library (aka glibc or libc6) through 2.32, when processing invalid multi-byte input sequences in the EUC-KR encoding, may have a buffer over-read. Red Hat Enterprise Linux (v. 8) - aarch64, ppc64le, s390x, x86_64
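
The defect is in glibc's C converter, but the expected behavior is easy to show with Python's own EUC-KR codec (unaffected, used here purely for illustration): a validating decoder must reject truncated or invalid multi-byte sequences rather than read past the end of the input buffer.

```python
# Illustrative sketch: what a correct EUC-KR decoder does with invalid input.
# The affected glibc iconv could instead over-read past the buffer.
def try_decode_euc_kr(data: bytes):
    try:
        return data.decode("euc_kr")
    except UnicodeDecodeError:
        return None  # invalid/truncated input is refused, not over-read

# b"\xb0\xa1" is the complete two-byte EUC-KR encoding of HANGUL 가 (U+AC00);
# a lone lead byte such as b"\xb0" is an incomplete sequence and must fail.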

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.4 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):

1428290 - CVE-2016-10228 glibc: iconv program can hang when invoked with the -c option
1684057 - CVE-2019-9169 glibc: regular-expression match via proceed_next_node in posix/regexec.c leads to heap-based buffer over-read
1704868 - CVE-2016-10228 glibc: iconv: Fix converter hangs and front end option parsing for //TRANSLIT and //IGNORE [rhel-8]
1855790 - glibc: Update Intel CET support from upstream
1856398 - glibc: Build with -moutline-atomics on aarch64
1868106 - glibc: Transaction ID collisions cause slow DNS lookups in getaddrinfo
1871385 - glibc: Improve auditing implementation (including DT_AUDIT, and DT_DEPAUDIT)
1871387 - glibc: Improve IBM POWER9 architecture performance
1871394 - glibc: Fix AVX2 off-by-one error in strncmp (swbz#25933)
1871395 - glibc: Improve IBM Z (s390x) Performance
1871396 - glibc: Improve use of static TLS surplus for optimizations.

Bugs fixed (https://bugzilla.redhat.com/):

1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation

  1. Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve

  1. JIRA issues fixed (https://issues.jboss.org/):

TRACING-1725 - Elasticsearch operator reports x509 errors communicating with ElasticSearch in OpenShift Service Mesh project

  1. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  1. Description:

The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the name service cache daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly.

Bug Fix(es):

  • glibc: 64bit_strstr_via_64bit_strstr_sse2_unaligned detection fails with large device and inode numbers (BZ#1883162)

  • glibc: Performance regression in ebizzy benchmark (BZ#1889977)

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the glibc library must be restarted, or the system rebooted. Package List:
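
The restart requirement exists because a running process keeps executing the old library it mapped at startup. On Linux, /proc/<pid>/maps marks such mappings "(deleted)" after the package update. A hedged sketch of that check (illustrative; in real deployments a tool such as needs-restarting from yum-utils does this properly):

```python
# Find PIDs still mapping a deleted libc after a glibc update (Linux only).
# Illustrative sketch, not a replacement for distribution tooling.
import glob
import re

def pids_with_stale_libc():
    stale = []
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps_path) as f:
                text = f.read()
        except OSError:
            continue  # process exited or access was denied; skip it
        # A maps line for a replaced library ends like:
        #   ... /usr/lib64/libc-2.17.so (deleted)
        if re.search(r"/libc[-.][^\s]*\s+\(deleted\)", text):
            stale.append(int(maps_path.split("/")[2]))
    return sorted(stale)
```

On a freshly rebooted (or non-Linux) machine the list is simply empty.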

Red Hat Enterprise Linux Client (v. 7):

Source: glibc-2.17-322.el7_9.src.rpm

x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: glibc-2.17-322.el7_9.src.rpm

x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-static-2.17-322.el7_9.i686.rpm glibc-static-2.17-322.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: glibc-2.17-322.el7_9.src.rpm

ppc64: glibc-2.17-322.el7_9.ppc.rpm glibc-2.17-322.el7_9.ppc64.rpm glibc-common-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-2.17-322.el7_9.ppc.rpm glibc-debuginfo-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm glibc-devel-2.17-322.el7_9.ppc.rpm glibc-devel-2.17-322.el7_9.ppc64.rpm glibc-headers-2.17-322.el7_9.ppc64.rpm glibc-utils-2.17-322.el7_9.ppc64.rpm nscd-2.17-322.el7_9.ppc64.rpm

ppc64le: glibc-2.17-322.el7_9.ppc64le.rpm glibc-common-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm glibc-devel-2.17-322.el7_9.ppc64le.rpm glibc-headers-2.17-322.el7_9.ppc64le.rpm glibc-utils-2.17-322.el7_9.ppc64le.rpm nscd-2.17-322.el7_9.ppc64le.rpm

s390x: glibc-2.17-322.el7_9.s390.rpm glibc-2.17-322.el7_9.s390x.rpm glibc-common-2.17-322.el7_9.s390x.rpm glibc-debuginfo-2.17-322.el7_9.s390.rpm glibc-debuginfo-2.17-322.el7_9.s390x.rpm glibc-debuginfo-common-2.17-322.el7_9.s390.rpm glibc-debuginfo-common-2.17-322.el7_9.s390x.rpm glibc-devel-2.17-322.el7_9.s390.rpm glibc-devel-2.17-322.el7_9.s390x.rpm glibc-headers-2.17-322.el7_9.s390x.rpm glibc-utils-2.17-322.el7_9.s390x.rpm nscd-2.17-322.el7_9.s390x.rpm

x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: glibc-debuginfo-2.17-322.el7_9.ppc.rpm glibc-debuginfo-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm glibc-static-2.17-322.el7_9.ppc.rpm glibc-static-2.17-322.el7_9.ppc64.rpm

ppc64le: glibc-debuginfo-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm glibc-static-2.17-322.el7_9.ppc64le.rpm

s390x: glibc-debuginfo-2.17-322.el7_9.s390.rpm glibc-debuginfo-2.17-322.el7_9.s390x.rpm glibc-debuginfo-common-2.17-322.el7_9.s390.rpm glibc-debuginfo-common-2.17-322.el7_9.s390x.rpm glibc-static-2.17-322.el7_9.s390.rpm glibc-static-2.17-322.el7_9.s390x.rpm

x86_64: glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-static-2.17-322.el7_9.i686.rpm glibc-static-2.17-322.el7_9.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: glibc-2.17-322.el7_9.src.rpm

x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm
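
Whether an installed glibc already carries the fix comes down to comparing its version-release against 2.17-322.el7_9. A simplified sketch of RPM-style label comparison (real checks should use rpm's own rpmvercmp, e.g. via `rpm -q glibc`; the segment splitting below is an approximation of that algorithm):

```python
# Approximate rpmvercmp: split labels into numeric and alphabetic runs,
# compare piecewise; numeric segments sort as integers and beat alphabetic
# ones. Illustrative only -- rpm's real algorithm has more corner cases.
import re

def rpm_label_cmp(a: str, b: str) -> int:
    def parts(s):
        return re.findall(r"\d+|[a-zA-Z]+", s)
    pa, pb = parts(a), parts(b)
    for x, y in zip(pa, pb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)          # "322" > "9" numerically
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric beats alpha in rpmvercmp
        if x != y:
            return 1 if x > y else -1
    # Equal prefix: the longer label wins
    return (len(pa) > len(pb)) - (len(pa) < len(pb))
```

So a host reporting glibc 2.17-260.el7 compares older than 2.17-322.el7_9 and still needs the errata.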

Red Hat Enterprise Linux Workstation Optional (v. 7)

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
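
The digests above pin each release image by the SHA-256 of its manifest. A minimal illustrative check of such a `sha256:<hex>` reference against raw bytes (a sketch of the OCI digest format, not the oc implementation):

```python
# Verify that a blob matches an OCI-style "algorithm:hex" digest reference.
# Illustrative sketch; only sha256 is handled here.
import hashlib

def matches_digest(blob: bytes, digest: str) -> bool:
    algo, _, hexdigest = digest.partition(":")
    if algo != "sha256":
        raise ValueError("only sha256 is handled in this sketch")
    return hashlib.sha256(blob).hexdigest() == hexdigest
```

Any single-byte change to the manifest yields a different digest, which is why pulling by digest (rather than by tag) is tamper-evident.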

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for detailed instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform 1976894 - Unidling a StatefulSet does not work as expected 1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases 1977414 - Build Config timed out waiting for condition 400: Bad Request 1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus 1978528 - systemd-coredump started and failed intermittently for unknown reasons 1978581 - machine-config-operator: remove runlevel from mco namespace 1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable 1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9 1979966 - OCP builds always fail when run on RHEL7 nodes 1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading 1981549 - Machine-config daemon does not recover from broken Proxy configuration 1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel] 1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues 1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page 1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands 1982662 - Workloads - DaemonSets - Add storage: i18n misses 1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters 1983758 - upgrades are failing on disruptive tests 1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma" 1984592 - global pull secret not working in OCP4.7.4+ for additional private registries 1985073 - new-in-4.8 
ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs 1985486 - Cluster Proxy not used during installation on OSP with Kuryr 1985724 - VM Details Page missing translations 1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted 1985933 - Downstream image registry recommendation 1985965 - oVirt CSI driver does not report volume stats 1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding" 1986237 - "MachineNotYetDeleted" in Pending state , alert not fired 1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid" 1986302 - console continues to fetch prometheus alert and silences for normal user 1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI 1986338 - error creating list of resources in Import YAML 1986502 - yaml multi file dnd duplicates previous dragged files 1986819 - fix string typos for hot-plug disks 1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure 1987136 - Declare operatorframework.io/arch. 
labels for all operators 1987257 - Go-http-client user-agent being used for oc adm mirror requests 1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold 1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP 1988406 - SSH key dropped when selecting "Customize virtual machine" in UI 1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade 1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs 1989438 - expected replicas is wrong 1989502 - Developer Catalog is disappearing after short time 1989843 - 'More' and 'Show Less' functions are not translated on several page 1990014 - oc debug does not work for Windows pods 1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created 1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page 1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar 1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI 1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks 1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var 1990625 - Ironic agent registers with SLAAC address with privacy-stable 1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time 1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem (or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too many recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seem to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console - Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN] EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when it cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not be in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visits Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator (SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a cluster when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which needs to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE] Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure] Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URLs expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Update of task is failing (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large number of alerts are defined
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests don't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema even though they cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter in the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard cannot be used if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM actions "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keeps restarting due to failure to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content (/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine] Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflects wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not set up correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when opening topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatically updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore
resolve-names annotation 2032732 - Fix styling conflicts due to recent console-wide CSS changes 2032831 - Knative Services and Revisions are not shown when Service has no ownerReference 2032851 - Networking is "not available" in Virtualization Overview 2032926 - Machine API components should use K8s 1.23 dependencies 2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24 2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster 2033013 - Project dropdown in user preferences page is broken 2033044 - Unable to change import strategy if devfile is invalid 2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable 2033111 - IBM VPC operator library bump removed global CLI args 2033138 - "No model registered for Templates" shows on customize wizard 2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected 2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected 2033257 - unable to use configmap for helm charts 2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered 2033290 - Product builds for console are failing 2033382 - MAPO is missing machine annotations 2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations 2033403 - Devfile catalog does not show provider information 2033404 - Cloud event schema is missing source type and resource field is using wrong value 2033407 - Secure route data is not pre-filled in edit flow form 2033422 - CNO not allowing LGW conversion from SGW in runtime 2033434 - Offer darwin/arm64 oc in clidownloads 2033489 - CCM operator failing on baremetal platform 2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver 2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains 2033536 - [IPI on 
Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady 2033538 - Gather Cost Management Metrics Custom Resource 2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined 2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page 2033634 - list-style-type: disc is applied to the modal dropdowns 2033720 - Update samples in 4.10 2033728 - Bump OVS to 2.16.0-33 2033729 - remove runtime request timeout restriction for azure 2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended 2033749 - Azure Stack Terraform fails without Local Provider 2033750 - Local volume should pull multi-arch image for kube-rbac-proxy 2033751 - Bump kubernetes to 1.23 2033752 - make verify fails due to missing yaml-patch 2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource 2034004 - [e2e][automation] add tests for VM snapshot improvements 2034068 - [e2e][automation] Enhance tests for 4.10 downstream 2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore 2034097 - [OVN] After edit EgressIP object, the status is not correct 2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning 2034129 - blank page returned when clicking 'Get started' button 2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0 2034153 - CNO does not verify MTU migration for OpenShiftSDN 2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled 2034170 - Use function.knative.dev for Knative Functions related labels 2034190 - unable to add new VirtIO disks to VMs 2034192 - Prometheus fails to insert reporting metrics when the sample limit is met 2034243 - regular user cant load template list 2034245 - 
installing a cluster on aws, gcp always fails with "Error: Incompatible provider version" 2034248 - GPU/Host device modal is too small 2034257 - regular user Create VM missing permissions alert 2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial] 2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere 2034300 - Du validator policy is NonCompliant after DU configuration completed 2034319 - Negation constraint is not validating packages 2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology 2034350 - The CNO should implement the Whereabouts IP reconciliation cron job 2034362 - update description of disk interface 2034398 - The Whereabouts IPPools CRD should include the podref field 2034409 - Default CatalogSources should be pointing to 4.10 index images 2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics 2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode 2034460 - Summary: cloud-network-config-controller does not account for different environment 2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true 2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly 2034493 - Change cluster version operator log level 2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list 2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6 2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer 2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART 2034537 - Update team 2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds 2034563 - [Azure] create machine with wrong 
ephemeralStorageLocation value success 2034577 - Current OVN gateway mode should be reflected on node annotation as well 2034621 - context menu not popping up for application group 2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10 2034624 - Warn about unsupported CSI driver in vsphere operator 2034647 - missing volumes list in snapshot modal 2034648 - Rebase openshift-controller-manager to 1.23 2034650 - Rebase openshift/builder to 1.23 2034705 - vSphere: storage e2e tests logging configuration data 2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment 2034785 - ptpconfig with summary_interval cannot be applied 2034823 - RHEL9 should be starred in template list 2034838 - An external router can inject routes if no service is added 2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent 2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty 2034881 - Cloud providers components should use K8s 1.23 dependencies 2034884 - ART cannot build the image because it tries to download controller-gen 2034889 - oc adm prune deployments does not work 2034898 - Regression in recently added Events feature 2034957 - update openshift-apiserver to kube 1.23.1 2035015 - ClusterLogForwarding CR remains stuck remediating forever 2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster 2035141 - [RFE] Show GPU/Host devices in template's details tab 2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC 2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting 2035199 - IPv6 support in mtu-migration-dispatcher.yaml 2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing 2035250 - Peering with ebgp peer over multi-hops doesn't 
work 2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices 2035315 - invalid test cases for AWS passthrough mode 2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env 2035321 - Add Sprint 211 translations 2035326 - [ExternalCloudProvider] installation with additional network on workers fails 2035328 - Ccoctl does not ignore credentials request manifest marked for deletion 2035333 - Kuryr orphans ports on 504 errors from Neutron 2035348 - Fix two grammar issues in kubevirt-plugin.json strings 2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets 2035409 - OLM E2E test depends on operator package that's no longer published 2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address 2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1' 2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster 2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page 2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers 2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class 2035602 - [e2e][automation] add tests for Virtualization Overview page cards 2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly 2035704 - RoleBindings list page filter doesn't apply 2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing. 
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project
is selected 2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff 2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD 2045024 - CustomNoUpgrade alerts should be ignored 2045112 - vsphere-problem-detector has missing rbac rules for leases 2045199 - SnapShot with Disk Hot-plug hangs 2045561 - Cluster Autoscaler should use the same default Group value as Cluster API 2045591 - Reconciliation of aws pod identity mutating webhook did not happen 2045849 - Add Sprint 212 translations 2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10 2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin 2045916 - [IBMCloud] Default machine profile in installer is unreliable 2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment 2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify 2046137 - oc output for unknown commands is not human readable 2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance 2046297 - Bump DB reconnect timeout 2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations 2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors 2046626 - Allow setting custom metrics for Ansible-based Operators 2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud 2047025 - Installation fails because of Alibaba CSI driver operator is degraded 2047190 - Bump Alibaba CSI driver for 4.10 2047238 - When using communities and localpreferences together, only localpreference gets applied 2047255 - alibaba: resourceGroupID not found 2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions 
2047317 - Update HELM OWNERS files under Dev Console 2047455 - [IBM Cloud] Update custom image os type 2047496 - Add image digest feature 2047779 - do not degrade cluster if storagepolicy creation fails 2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047929 - use lease for leader election 2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2048048 - Application tab in User Preferences dropdown menus are too wide. 2048050 - Topology list view items are not highlighted on keyboard navigation 2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2048413 - Bond CNI: Failed to attach Bond NAD to pod 2048443 - Image registry operator panics when finalizes config deletion 2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-* 2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2048598 - Web terminal view is broken 2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2048891 - Topology page is crashed 2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2049043 - Cannot create VM from template 2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2049886 - Placeholder bug for OCP 4.10.0 metadata release 2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 2050189 - [aws-efs-csi-driver] Merge upstream changes since 
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

  1. References:

https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL 0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z 7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT b5PUYUBIZLc= =GUDA -----END PGP SIGNATURE----- . Description:

Red Hat OpenShift Serverless 1.17.0 release of the OpenShift Serverless Operator.

Security Fix(es):

  • golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
  • golang: net: lookup functions may return invalid host names (CVE-2021-33195)
  • golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
  • golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
  • golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
  • golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
  • golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)

It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 were incorrectly mentioned as fixed in the RHSA for Serverless client kn 1.16.0. This has been fixed (CVE-2021-3703).

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):

1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic 1983651 - Release of OpenShift Serverless Serving 1.17.0 1983654 - Release of OpenShift Serverless Eventing 1.17.0 1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names 1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty 1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents 1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0119",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "32"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "33"
      },
      {
        "model": "ontap select deploy administration utility",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "service processor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "500f",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "a250",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fabric operating system",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "broadcom",
        "version": null
      },
      {
        "model": "glibc",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "gnu",
        "version": "2.32"
      },
      {
        "model": "fas/aff baseboard management controller 500f",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "c library",
        "scope": null,
        "trust": 0.8,
        "vendor": "gnu",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "service processor",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fabric operating system",
        "scope": null,
        "trust": 0.8,
        "vendor": "broadcom",
        "version": null
      },
      {
        "model": "ontap select deploy administration utility",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas/aff baseboard management controller a250",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164192"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2019-25013",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.1,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-25013",
            "impactScore": 6.9,
            "integrityImpact": "NONE",
            "severity": "HIGH",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 5.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "id": "CVE-2019-25013",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 5.9,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2019-25013",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-25013",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2019-25013",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2019-25013",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202101-048",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-25013",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The iconv feature in the GNU C Library (aka glibc or libc6) through 2.32, when processing invalid multi-byte input sequences in the EUC-KR encoding, may have a buffer over-read. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.4 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1428290 - CVE-2016-10228 glibc: iconv program can hang when invoked with the -c option\n1684057 - CVE-2019-9169 glibc: regular-expression match via proceed_next_node in posix/regexec.c leads to heap-based buffer over-read\n1704868 - CVE-2016-10228 glibc: iconv: Fix converter hangs and front end option parsing for //TRANSLIT and //IGNORE [rhel-8]\n1855790 - glibc: Update Intel CET support from upstream\n1856398 - glibc: Build with -moutline-atomics on aarch64\n1868106 - glibc: Transaction ID collisions cause slow DNS lookups in getaddrinfo\n1871385 - glibc: Improve auditing implementation (including DT_AUDIT, and DT_DEPAUDIT)\n1871387 - glibc: Improve IBM POWER9 architecture performance\n1871394 - glibc: Fix AVX2 off-by-one error in strncmp (swbz#25933)\n1871395 - glibc: Improve IBM Z (s390x) Performance\n1871396 - glibc: Improve use of static TLS surplus for optimizations. Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nTRACING-1725 -  Elasticsearch operator reports x509 errors communicating with ElasticSearch in OpenShift Service Mesh project\n\n6. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 
7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe glibc packages provide the standard C libraries (libc), POSIX thread\nlibraries (libpthread), standard math libraries (libm), and the name\nservice cache daemon (nscd) used by multiple programs on the system. \nWithout these libraries, the Linux system cannot function correctly. \n\nBug Fix(es):\n\n* glibc: 64bit_strstr_via_64bit_strstr_sse2_unaligned detection fails with\nlarge device and inode numbers (BZ#1883162)\n\n* glibc: Performance regression in ebizzy benchmark (BZ#1889977)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the glibc library\nmust be restarted, or the system rebooted. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 
7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-static-2.17-322.el7_9.i686.rpm\nglibc-static-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nppc64:\nglibc-2.17-322.el7_9.ppc.rpm\nglibc-2.17-322.el7_9.ppc64.rpm\nglibc-common-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm\nglibc-devel-2.17-322.el7_9.ppc.rpm\nglibc-devel-2.17-322.el7_9.ppc64.rpm\nglibc-headers-2.17-322.el7_9.ppc64.rpm\nglibc-utils-2.17-322.el7_9.ppc64.rpm\nnscd-2.17-322.el7_9.ppc64.rpm\n\nppc64le:\nglibc-2.17-322.el7_9.ppc64le.rpm\nglibc-common-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm\nglibc-devel-2.17-322.el7_9.ppc64le.rpm\nglibc-headers-2.17-322.el7_9.ppc64le.rpm\nglibc-utils-2.17-322.el7_9.ppc64le.rpm\nnscd-2.17-322.el7_9.ppc64le.rpm\n\ns390x:\nglibc-2.17-322.el7_9.s390.rpm\nglibc-2.17-322.el7_9.s390x.rpm\nglibc-common-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390.rpm\nglibc
-debuginfo-common-2.17-322.el7_9.s390x.rpm\nglibc-devel-2.17-322.el7_9.s390.rpm\nglibc-devel-2.17-322.el7_9.s390x.rpm\nglibc-headers-2.17-322.el7_9.s390x.rpm\nglibc-utils-2.17-322.el7_9.s390x.rpm\nnscd-2.17-322.el7_9.s390x.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nglibc-debuginfo-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm\nglibc-static-2.17-322.el7_9.ppc.rpm\nglibc-static-2.17-322.el7_9.ppc64.rpm\n\nppc64le:\nglibc-debuginfo-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm\nglibc-static-2.17-322.el7_9.ppc64le.rpm\n\ns390x:\nglibc-debuginfo-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390x.rpm\nglibc-static-2.17-322.el7_9.s390.rpm\nglibc-static-2.17-322.el7_9.s390x.rpm\n\nx86_64:\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-static-2.17-322.el7_9.i686.rpm\nglibc-static-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID:       RHSA-2022:0056-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:0056\nIssue date:        2022-03-10\nCVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n                   
CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n                   CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for  additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
\n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t resilient to WAL replays. 
\n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String updates\n1961509 - DHCP daemon 
pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established\n1976301 - [ci] 
e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027  is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC \"needVhostNet\" \u0026 
\"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable 
to connect to the server\"\n1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs\n1989438 - expected replicas is wrong\n1989502 - Developer Catalog is disappearing after short time\n1989843 - \u0027More\u0027 and \u0027Show Less\u0027 functions are not translated on several page\n1990014 - oc debug \u003cpod-name\u003e does not work for Windows pods\n1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created\n1990193 - \u0027more\u0027 and \u0027Show Less\u0027  is not being translated on Home -\u003e Search page\n1990255 - Partial or all of the Nodes/StorageClasses don\u0027t appear back on UI after text is removed from search bar\n1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI\n1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks\n1990556 - get-resources.sh doesn\u0027t honor the no_proxy settings even with no_proxy var\n1990625 - Ironic agent registers with SLAAC address with privacy-stable\n1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time\n1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
\n1991573 - Enable typescript strictNullCheck on network-policies files\n1991641 - Baremetal Cluster Operator still Available After Delete Provisioning\n1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator\n1991819 - Misspelled word \"ocurred\"  in oc inspect cmd\n1991942 - Alignment and spacing fixes\n1992414 - Two rootdisks show on storage step if \u0027This is a CD-ROM boot source\u0027  is checked\n1992453 - The configMap failed to save on VM environment tab\n1992466 - The button \u0027Save\u0027 and \u0027Reload\u0027 are not translated on vm environment tab\n1992475 - The button \u0027Open console in New Window\u0027 and \u0027Disconnect\u0027 are not translated on vm console tab\n1992509 - Could not customize boot source due to source PVC not found\n1992541 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1992580 - storageProfile should stay with the same value by check/uncheck the apply button\n1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply\n1992777 - [IBMCLOUD] Default \"ibm_iam_authorization_policy\" is not working as expected in all scenarios\n1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)\n1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing\n1994094 - Some hardcodes are detected at the code level in OpenShift console components\n1994142 - Missing required cloud config fields for IBM Cloud\n1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools\n1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart\n1995335 - [SCALE] ovnkube CNI: remove ovs flows check\n1995493 - Add Secret to workload button and Actions button are not aligned on secret details 
page\n1995531 - Create RDO-based Ironic image to be promoted to OKD\n1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator\n1995887 - [OVN]After reboot egress node,  lr-policy-list was not correct, some duplicate records or missed internal IPs\n1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread\n1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole\n1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN\n1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down\n1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page\n1996647 - Provide more useful degraded message in auth operator on DNS errors\n1996736 - Large number of 501 lr-policies in INCI2 env\n1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes\n1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP\n1996928 - Enable default operator indexes on ARM\n1997028 - prometheus-operator update removes env var support for thanos-sidecar\n1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used\n1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
\n1997245 - \"Subscription already exists in openshift-storage namespace\" error message is seen while installing odf-operator via UI\n1997269 - Have to refresh console to install kube-descheduler\n1997478 - Storage operator is not available after reboot cluster instances\n1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n1997967 - storageClass is not reserved from default wizard to customize wizard\n1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order\n1998038 - [e2e][automation] add tests for UI for VM disk hot-plug\n1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus\n1998174 - Create storageclass gp3-csi  after install ocp cluster on aws\n1998183 - \"r: Bad Gateway\" info is improper\n1998235 - Firefox warning: Cookie \u201ccsrf-token\u201d will be soon rejected\n1998377 - Filesystem table head is not full displayed in disk tab\n1998378 - Virtual Machine is \u0027Not available\u0027 in Home -\u003e Overview -\u003e Cluster inventory\n1998519 - Add fstype when create localvolumeset instance on web console\n1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses\n1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page\n1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable\n1999091 - Console update toast notification can appear multiple times\n1999133 - removing and recreating static pod manifest leaves pod in error state\n1999246 - .indexignore is not ingore when oc command load dc configuration\n1999250 - ArgoCD in GitOps operator can\u0027t manage namespaces\n1999255 - ovnkube-node always crashes out the first time it starts\n1999261 - ovnkube-node log spam (and security token leak?)\n1999309 - While installing odf-operator via UI, web console update pop-up 
navigates to OperatorHub -\u003e Operator Installation page\n1999314 - console-operator is slow to mark Degraded as False once console starts working\n1999425 - kube-apiserver with \"[SHOULD NOT HAPPEN] failed to update managedFields\" err=\"failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)\n1999556 - \"master\" pool should be updated before the CVO reports available at the new version occurred\n1999578 - AWS EFS CSI tests are constantly failing\n1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages\n1999619 - cloudinit is malformatted if a user sets a password during VM creation flow\n1999621 - Empty ssh_authorized_keys entry is added to VM\u0027s cloudinit if created from a customize flow\n1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined\n1999668 - openshift-install destroy cluster panic\u0027s when given invalid credentials to cloud provider (Azure Stack Hub)\n1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource\n1999771 - revert \"force cert rotation every couple days for development\" in 4.10\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
\n1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions\n1999903 - Click \"This is a CD-ROM boot source\" ticking \"Use template size PVC\" on pvc upload form\n1999983 - No way to clear upload error from template boot source\n2000081 - [IPI baremetal]  The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter\n2000096 - Git URL is not re-validated on edit build-config form reload\n2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig\n2000236 - Confusing usage message from dynkeepalived CLI\n2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported\n2000430 - bump cluster-api-provider-ovirt version in installer\n2000450 - 4.10: Enable static PV multi-az test\n2000490 - All critical alerts shipped by CMO should have links to a runbook\n2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)\n2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster\n2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled\n2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console\n2000754 - IPerf2 tests should be lower\n2000846 - Structure logs in the entire codebase of Local Storage Operator\n2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24\n2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM\n2000938 - CVO does not respect changes to a Deployment strategy\n2000963 - \u0027Inline-volume (default fs)] volumes should store data\u0027 tests are failing on OKD with updated selinux-policy\n2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don\u0027t have snapshot and should be fullClone\n2001240 - Remove response headers for downloads of binaries from 
OpenShift WebConsole\n2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error\n2001337 - Details Card in ODF Dashboard mentions OCS\n2001339 - fix text content hotplug\n2001413 - [e2e][automation] add/delete nic and disk to template\n2001441 - Test: oc adm must-gather runs successfully for audit logs -  fail due to startup log\n2001442 - Empty termination.log file for the kube-apiserver has too permissive mode\n2001479 - IBM Cloud DNS unable to create/update records\n2001566 - Enable alerts for prometheus operator in UWM\n2001575 - Clicking on the perspective switcher shows a white page with loader\n2001577 - Quick search placeholder is not displayed properly when the search string is removed\n2001578 - [e2e][automation] add tests for vm dashboard tab\n2001605 - PVs remain in Released state for a long time after the claim is deleted\n2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options\n2001620 - Cluster becomes degraded if it can\u0027t talk to Manila\n2001760 - While creating \u0027Backing Store\u0027, \u0027Bucket Class\u0027, \u0027Namespace Store\u0027 user is navigated to \u0027Installed Operators\u0027 page after clicking on ODF\n2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
\n2001765 - Some error message in the log  of diskmaker-manager caused confusion\n2001784 - show loading page before final results instead of showing a transient message No log files exist\n2001804 - Reload feature on Environment section in Build Config form does not work properly\n2001810 - cluster admin unable to view BuildConfigs in all namespaces\n2001817 - Failed to load RoleBindings list that will lead to \u2018Role name\u2019 is not able to be selected on Create RoleBinding page as well\n2001823 - OCM controller must update operator status\n2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start\n2001835 - Could not select image tag version when create app from dev console\n2001855 - Add capacity is disabled for ocs-storagecluster\n2001856 - Repeating event: MissingVersion no image found for operand pod\n2001959 - Side nav list borders don\u0027t extend to edges of container\n2002007 - Layout issue on \"Something went wrong\" page\n2002010 - ovn-kube may never attempt to retry a pod creation\n2002012 - Cannot change volume mode when cloning a VM from a template\n2002027 - Two instances of Dotnet helm chart show as one in topology\n2002075 - opm render does not automatically pulling in the image(s) used in the deployments\n2002121 - [OVN] upgrades failed for IPI  OSP16 OVN  IPSec cluster\n2002125 - Network policy details page heading should be updated to Network Policy details\n2002133 - [e2e][automation] add support/virtualization and improve deleteResource\n2002134 - [e2e][automation] add test to verify vm details tab\n2002215 - Multipath day1 not working on s390x\n2002238 - Image stream tag is not persisted when switching from yaml to form editor\n2002262 - [vSphere] Incorrect user agent in vCenter sessions list\n2002266 - SinkBinding create form doesn\u0027t allow to use subject name, instead of label selector\n2002276 - OLM fails to upgrade operators immediately\n2002300 - Altering the Schedule Profile configurations 
doesn\u0027t affect the placement of the pods\n2002354 - Missing DU configuration \"Done\" status reporting during ZTP flow\n2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn\u0027t use commonjs\n2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation\n2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN\n2002397 - Resources search is inconsistent\n2002434 - CRI-O leaks some children PIDs\n2002443 - Getting undefined error on create local volume set page\n2002461 - DNS operator performs spurious updates in response to API\u0027s defaulting of service\u0027s internalTrafficPolicy\n2002504 - When the openshift-cluster-storage-operator is degraded because of \"VSphereProblemDetectorController_SyncError\", the insights operator is not sending the logs from all pods. \n2002559 - User preference for topology list view does not follow when a new namespace is created\n2002567 - Upstream SR-IOV worker doc has broken links\n2002588 - Change text to be sentence case to align with PF\n2002657 - ovn-kube egress IP monitoring is using a random port over the node network\n2002713 - CNO: OVN logs should have millisecond resolution\n2002748 - [ICNI2] \u0027ErrorAddingLogicalPort\u0027 failed to handle external GW check: timeout waiting for namespace event\n2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite\n2002763 - Two storage systems getting created with external mode RHCS\n2002808 - KCM does not use web identity credentials\n2002834 - Cluster-version operator does not remove unrecognized volume mounts\n2002896 - Incorrect result return when user filter data by name on search page\n2002950 - Why spec.containers.command is not created with \"oc create deploymentconfig \u003cdc-name\u003e --image=\u003cimage\u003e -- \u003ccommand\u003e\"\n2003096 - [e2e][automation] check bootsource URL is displaying on review step\n2003113 - OpenShift Baremetal IPI 
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027  ENV  which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI  start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation]  add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws  bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer:  ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on  self._get_vip_port(loadbalancer).id   
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10]  sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console - Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure] Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment
fails \u0027timeout reached while inspecting the node\u0027 when provisioning network ipv6\n2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer\n2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART\n2034537 - Update team\n2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds\n2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success\n2034577 - Current OVN gateway mode should be reflected on node annotation as well\n2034621 - context menu not popping up for application group\n2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10\n2034624 - Warn about unsupported CSI driver in vsphere operator\n2034647 - missing volumes list in snapshot modal\n2034648 - Rebase openshift-controller-manager to 1.23\n2034650 - Rebase openshift/builder to 1.23\n2034705 - vSphere: storage e2e tests logging configuration data\n2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 
2034766 - Special Resource Operator (SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to delete cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who is not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration] ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a" shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page becomes blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP] cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade] Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to a non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbound when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normally on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed to add same bgp advertisement twice in BGP address pool
2042265 - [IBM] "--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeeds
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug 2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io] Cluster common user use the storageclass without parameter "csi.storage.k8s.io/fstype" create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there aren't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud] "--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM] Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update spec.storage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degraded because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error  \"unable to pull manifest from example.com/busy.box:v5  invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat OpenShift Serverless 1.17.0 release of the OpenShift Serverless\nOperator. 
\n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: match/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. This has been fixed (CVE-2021-3703). \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164192"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-25013",
        "trust": 4.0
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-166-10",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99464755",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162634",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "162837",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "163267",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "163496",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "161254",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164192",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "163789",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "163276",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "163406",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "162877",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0868",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6426",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2228",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2180",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0875",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0373",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0728",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0743",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2711",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1866",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3141",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4058",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1820",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5140",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1743",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4222",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2604",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2365",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2781",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022011038",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031430",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021071310",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021070604",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062703",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062315",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021071516",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021122914",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092220",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-25013",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164192"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "id": "VAR-202101-0119",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.465277775
  },
  "last_update_date": "2025-12-22T21:27:57.360000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Bug\u00a024973 NetApp\u00a0Advisory",
        "trust": 0.8,
        "url": "https://www.broadcom.com/"
      },
      {
        "title": "GNU C Library Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=138312"
      },
      {
        "title": "Debian CVElist Bug Report Logs: glibc: CVE-2019-25013",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=7073abdc63eae799f90555726b8fbe41"
      },
      {
        "title": "Red Hat: Moderate: glibc security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210348 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2021-1599",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2021-1599"
      },
      {
        "title": "Ubuntu Security Notice: USN-5768-1: GNU C Library vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5768-1"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2019-25013 log"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2021-1511",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2021-1511"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202102-18] glibc: denial of service",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202102-18"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202102-17] lib32-glibc: denial of service",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202102-17"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.1.3 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210607 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2021-1605",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2021-1605"
      },
      {
        "title": "Ubuntu Security Notice: USN-5310-1: GNU C Library vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5310-1"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: Cloud Pak for Security contains security vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=08f19f0be4d5dcf7486e5abcdb671477"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2019-25013 "
      },
      {
        "title": "ecr-api",
        "trust": 0.1,
        "url": "https://github.com/YaleSpinup/ecr-api "
      },
      {
        "title": "sanction",
        "trust": 0.1,
        "url": "https://github.com/ctc-oss/sanction "
      },
      {
        "title": "release-the-code-litecoin",
        "trust": 0.1,
        "url": "https://github.com/brandoncamenisch/release-the-code-litecoin "
      },
      {
        "title": "interview_project",
        "trust": 0.1,
        "url": "https://github.com/domyrtille/interview_project "
      },
      {
        "title": "trivy-multiscanner",
        "trust": 0.1,
        "url": "https://github.com/onzack/trivy-multiscanner "
      },
      {
        "title": "spring-boot-app-with-log4j-vuln",
        "trust": 0.1,
        "url": "https://github.com/nedenwalker/spring-boot-app-with-log4j-vuln "
      },
      {
        "title": "giant-squid",
        "trust": 0.1,
        "url": "https://github.com/dispera/giant-squid "
      },
      {
        "title": "devops-demo",
        "trust": 0.1,
        "url": "https://github.com/epequeno/devops-demo "
      },
      {
        "title": "spring-boot-app-using-gradle",
        "trust": 0.1,
        "url": "https://github.com/nedenwalker/spring-boot-app-using-gradle "
      },
      {
        "title": "xyz-solutions",
        "trust": 0.1,
        "url": "https://github.com/sauliuspr/xyz-solutions "
      },
      {
        "title": "myapp-container-jaxrs",
        "trust": 0.1,
        "url": "https://github.com/akiraabe/myapp-container-jaxrs "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-125",
        "trust": 1.0
      },
      {
        "problemtype": "Out-of-bounds read (CWE-125) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 1.6,
        "url": "https://security.netapp.com/advisory/ntap-20210205-0004/"
      },
      {
        "trust": 1.6,
        "url": "https://security.gentoo.org/glsa/202107-07"
      },
      {
        "trust": 1.6,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.6,
        "url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00021.html"
      },
      {
        "trust": 1.6,
        "url": "https://sourceware.org/bugzilla/show_bug.cgi?id=24973"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r32d767ac804e9b8aad4355bb85960a6a1385eab7afff549a5e98660f%40%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r5af4430421bb6f9973294691a7904bbd260937e9eef96b20556f43ff%40%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r750eee18542bc02bd8350861c424ee60a9b9b225568fa09436a37ece%40%3cissues.zookeeper.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r448bb851cc8e6e3f93f3c28c70032b37062625d81214744474ac49e7%40%3cdev.kafka.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/rd2354f9ccce41e494fbadcbc5ad87218de6ec0fff8a7b54c8462226c%40%3cissues.zookeeper.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r7a2e94adfe0a2f0a1d42e4927e8c32ecac97d37db9cb68095fe9ddbc%40%3cdev.zookeeper.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772%40%3cdev.mina.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r4806a391091e082bdea17266452ca656ebc176e51bb3932733b3a0a2%40%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/tvcunlq3hxgs4vpuqkwtjgraw2ktfgxs/"
      },
      {
        "trust": 1.0,
        "url": "https://sourceware.org/git/?p=glibc.git%3ba=commit%3bh=ee7a3144c9922808181009b7b3e50e852fb4999b"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4y6tx47p47kabsfol26fldnvcwxdkdez/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r499e4f96d0b5109ef083f2feccd33c51650c1b7d7068aa3bd47efca9%40%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99464755/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-166-10"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.6,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r5af4430421bb6f9973294691a7904bbd260937e9eef96b20556f43ff@%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r7a2e94adfe0a2f0a1d42e4927e8c32ecac97d37db9cb68095fe9ddbc@%3cdev.zookeeper.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r448bb851cc8e6e3f93f3c28c70032b37062625d81214744474ac49e7@%3cdev.kafka.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://sourceware.org/git/?p=glibc.git;a=commit;h=ee7a3144c9922808181009b7b3e50e852fb4999b"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r4806a391091e082bdea17266452ca656ebc176e51bb3932733b3a0a2@%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772@%3cdev.mina.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rd2354f9ccce41e494fbadcbc5ad87218de6ec0fff8a7b54c8462226c@%3cissues.zookeeper.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/tvcunlq3hxgs4vpuqkwtjgraw2ktfgxs/"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r750eee18542bc02bd8350861c424ee60a9b9b225568fa09436a37ece@%3cissues.zookeeper.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r499e4f96d0b5109ef083f2feccd33c51650c1b7d7068aa3bd47efca9@%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4y6tx47p47kabsfol26fldnvcwxdkdez/"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r32d767ac804e9b8aad4355bb85960a6a1385eab7afff549a5e98660f@%3cjira.kafka.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164192/red-hat-security-advisory-2021-3556-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168011/red-hat-security-advisory-2022-5924-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163789/red-hat-security-advisory-2021-3119-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-cloud-pak-for-security-contains-security-vulnerabilities/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1866"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1743"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1820"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2711"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021071310"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163747/red-hat-security-advisory-2021-3016-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2781"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5140"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0373/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031430"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2365"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2180"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021122914"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162634/red-hat-security-advisory-2021-1585-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163276/red-hat-security-advisory-2021-2543-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0875"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/glibc-out-of-bounds-memory-reading-via-iconv-euc-kr-encoding-34360"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0728"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163496/red-hat-security-advisory-2021-2705-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0743"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2228"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062703"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092220"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0868"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6520474"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162837/red-hat-security-advisory-2021-2136-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163267/red-hat-security-advisory-2021-2532-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022011038"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161254/red-hat-security-advisory-2021-0348-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021070604"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021071516"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162877/red-hat-security-advisory-2021-2121-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062315"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4058"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4222"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163406/gentoo-linux-security-advisory-202107-07.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3141"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6426"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.2,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31525"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27918"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33196"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1585"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14347"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27835"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25704"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10878"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9948"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13012"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0431"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14345"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26137"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14360"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12464"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14314"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14360"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14356"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25643"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9983"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24394"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0431"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0342"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14344"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-u"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14345"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14344"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14361"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25285"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35508"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25212"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28974"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15437"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14346"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14356"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2705"
      },
      {
        "trust": 0.1,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10029"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0348"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29573"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29573"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33195"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33198"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-34558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3556"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3421"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3703"
      }
    ],
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164192"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164192"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-01-04T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "date": "2021-05-19T13:59:56",
        "db": "PACKETSTORM",
        "id": "162634"
      },
      {
        "date": "2021-05-27T13:28:54",
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "date": "2021-06-23T16:08:25",
        "db": "PACKETSTORM",
        "id": "163267"
      },
      {
        "date": "2021-07-14T15:02:07",
        "db": "PACKETSTORM",
        "id": "163496"
      },
      {
        "date": "2021-02-02T16:12:10",
        "db": "PACKETSTORM",
        "id": "161254"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2021-09-17T16:04:56",
        "db": "PACKETSTORM",
        "id": "164192"
      },
      {
        "date": "2021-01-04T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "date": "2021-09-10T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "date": "2021-01-04T18:15:13.027000",
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-25013"
      },
      {
        "date": "2022-12-12T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      },
      {
        "date": "2023-06-16T05:32:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      },
      {
        "date": "2025-06-09T16:15:30.703000",
        "db": "NVD",
        "id": "CVE-2019-25013"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "GNU\u00a0C\u00a0Library\u00a0 Out-of-bounds read vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-016179"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202101-048"
      }
    ],
    "trust": 0.6
  }
}

VAR-202104-1514

Vulnerability from variot - Updated: 2024-11-23 22:05

GNU Wget through 1.21.1 does not omit the Authorization header upon a redirect to a different origin, a related issue to CVE-2018-1000007. GNU Wget is free software developed by the GNU Project for downloading files over the Internet. It supports the three most common TCP/IP protocols: HTTP, HTTPS, and FTP. GNU Wget 1.21.1 and earlier versions are affected because the Authorization header is not dropped when a redirect leads to a different origin.
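The safe behavior the fix restores (dropping credentials when a redirect crosses an origin boundary) can be sketched as follows. This is an illustrative client-side check, not wget's actual implementation; the function names are ours:

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff scheme, host, and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # Fall back to the scheme's default port when none is given explicitly.
    default = {"http": 80, "https": 443}
    port_a = a.port if a.port is not None else default.get(a.scheme)
    port_b = b.port if b.port is not None else default.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

def headers_for_redirect(headers: dict, original_url: str, redirect_url: str) -> dict:
    """Copy the request headers, dropping Authorization on cross-origin redirects."""
    out = dict(headers)
    if not same_origin(original_url, redirect_url):
        out.pop("Authorization", None)
    return out
```

A client that forwards every header unchanged on redirect, as vulnerable wget versions did, leaks the credentials in `Authorization` to whatever host the redirect points at.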

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202104-1514",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "ontap select deploy administration utility",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "a250",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "cloud backup",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "wget",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "gnu",
        "version": "1.21.1"
      },
      {
        "model": "brocade fabric operating system",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "broadcom",
        "version": null
      },
      {
        "model": "500f",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "cve": "CVE-2021-31879",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-31879",
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 5.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-391716",
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.1,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 2.8,
            "id": "CVE-2021-31879",
            "impactScore": 2.7,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-31879",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202104-2167",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-391716",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-31879",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "GNU Wget through 1.21.1 does not omit the Authorization header upon a redirect to a different origin, a related issue to CVE-2018-1000007. GNU Wget is a set of free software developed by the GNU Project (Gnu Project Development) for downloading on the Internet. It supports downloading through the three most common TCP/IP protocols: HTTP, HTTPS and FTP. There is a security vulnerability in GNU Wget 1.21.1 and earlier versions. The vulnerability is caused by not ignoring Authorization when redirecting to a different source",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      },
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      }
    ],
    "trust": 1.08
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-31879",
        "trust": 1.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-391716",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "id": "VAR-202104-1514",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-11-23T22:05:08.872000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "GNU Wget Enter the fix for the verification error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=149520"
      },
      {
        "title": "Debian CVElist Bug Report Logs: CVE-2021-31879",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=ba1029a7c2538da0d8a896c8ad6f31c8"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-31879 log"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-134",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-134"
      },
      {
        "title": "KCC",
        "trust": 0.1,
        "url": "https://github.com/dgardella/KCC "
      },
      {
        "title": "log4jnotes",
        "trust": 0.1,
        "url": "https://github.com/kenlavbah/log4jnotes "
      },
      {
        "title": "devops-demo",
        "trust": 0.1,
        "url": "https://github.com/epequeno/devops-demo "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-601",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://mail.gnu.org/archive/html/bug-wget/2021-02/msg00002.html"
      },
      {
        "trust": 1.2,
        "url": "https://security.netapp.com/advisory/ntap-20210618-0002/"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31879"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/601.html"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988209"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2022/alas-2022-134.html"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-04-29T00:00:00",
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "date": "2021-04-29T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "date": "2021-04-29T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "date": "2021-04-29T05:15:08.707000",
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-05-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-391716"
      },
      {
        "date": "2022-05-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-31879"
      },
      {
        "date": "2021-05-07T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      },
      {
        "date": "2024-11-21T06:06:25.020000",
        "db": "NVD",
        "id": "CVE-2021-31879"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "GNU Wget Input validation error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "input validation error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-2167"
      }
    ],
    "trust": 0.6
  }
}

VAR-202011-0423

Vulnerability from variot - Updated: 2024-11-23 19:46

Use-after-free vulnerability in fs/block_dev.c in the Linux kernel before 5.8 allows local users to gain privileges or cause a denial of service by leveraging improper access to a certain error field. The Linux kernel contains a vulnerability related to the use of freed memory. Information may be disclosed or tampered with, and service operation may be interrupted, resulting in a denial-of-service (DoS) condition.

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

Bug fix:

  • RHACM 2.0.8 images (BZ #1915461)

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):

1915461 - RHACM 2.0.8 images
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation

  1. 7.6) - ppc64, ppc64le, x86_64

Bug Fix(es):

  • [infiniband] Backport Request to fix Multicast Sendonly joins (BZ#1937820)

Bugs fixed (https://bugzilla.redhat.com/):

1899804 - CVE-2020-28374 kernel: SCSI target (LIO) write to any block on ILO backstore
1901168 - CVE-2020-15436 kernel: use-after-free in fs/block_dev.c
1930078 - CVE-2021-27365 kernel: heap buffer overflow in the iSCSI subsystem
1930079 - CVE-2021-27363 kernel: iscsi: unrestricted access to sessions and handles
1930080 - CVE-2021-27364 kernel: out-of-bounds read in libiscsi module

  1. 7) - noarch, x86_64

Bug Fix(es):

  • kernel-rt: update to the latest RHEL7.9.z3 source tree (BZ#1906133)

  • [kernel-rt] WARNING: CPU: 8 PID: 586 at kernel/sched/core.c:3644 migrate_enable+0x15f/0x210 (BZ#1916123)

  • [kernel-rt-debug] [ BUG: bad unlock balance detected! ] [RHEL-7.9.z] (BZ#1916130)


====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: kernel security, bug fix, and enhancement update
Advisory ID:       RHSA-2021:0336-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:0336
Issue date:        2021-02-02
CVE Names:         CVE-2020-15436 CVE-2020-35513
====================================================================

1. Summary:

An update for kernel is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  3. Description:

The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fix(es):

  • kernel: use-after-free in fs/block_dev.c (CVE-2020-15436)

  • kernel: Nfsd failure to clear umask after processing an open or create (CVE-2020-35513)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

  • double free issue in filelayout_alloc_commit_info (BZ#1679980)

  • Regression: Plantronics Device SHS2355-11 PTT button does not work after update to 7.7 (BZ#1769502)

  • Openstack network node reports unregister_netdevice: waiting for qr-3cec0c92-9a to become free. Usage count = 1 (BZ#1809519)

  • dlm: add ability to interrupt waiting for acquire POSIX lock (BZ#1826858)

  • [Azure][RHEL7] soft lockups and performance loss occurring during final fsync with parallel dd writes to xfs filesystem in azure instance (BZ#1859364)

  • Guest crashed when hotplug vcpus on booting kernel stage (BZ#1866138)

  • soft lockup occurs while a thread group leader is waiting on tasklist_waiters in mm_update_next_owner() where a huge number of the thread group members are exiting and trying to take the tasklist_lock. (BZ#1872110)

  • [DELL EMC 7.6 BUG] Kioxia CM6 NVMe drive fails to enumerate (BZ#1883403)

  • [Hyper-V][RHEL7] Request to included a commit that adds a timeout to vmbus_wait_for_unload (BZ#1888979)

  • Unable to discover the LUNs from new storage port (BZ#1889311)

  • RHEL 7.9 Kernel panic at ceph_put_snap_realm+0x21 (BZ#1890386)

  • A hard lockup occurs where one task is looping in an sk_lock spinlock that has been taken by another task running timespec64_add_ns(). (BZ#1890911)

  • ethtool/mlx5_core provides incorrect SFP module info (BZ#1896756)

  • RHEL7.7 - zcrypt: Fix ZCRYPT_PERDEV_REQCNT ioctl (BZ#1896826)

  • RHEL7.7 - s390/dasd: Fix zero write for FBA devices (BZ#1896839)

  • [Azure]IP forwarding issue in netvsc[7.9.z] (BZ#1898280)

  • Security patch for CVE-2020-25212 breaks directory listings via 'ls' on NFS V4.2 shares mounted with selinux enabled labels (BZ#1917504)

Enhancement(s):

  • RFE : handle better ERRbaduid on SMB1 (BZ#1847041)

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

The system must be rebooted for this update to take effect.
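Concretely, the steps above amount to something like the following sketch for a RHEL 7 host (run as root; the exact package set yum resolves may differ per installation):

```shell
# Sketch only: apply the erratum and activate the fixed kernel.
yum update kernel        # updates to kernel-3.10.0-1160.15.2.el7 per this advisory
shutdown -r now          # reboot is required for the new kernel to take effect

# After the reboot, confirm the running kernel version:
uname -r                 # expect 3.10.0-1160.15.2.el7.<arch>
```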

  5. Bugs fixed (https://bugzilla.redhat.com/):

1901168 - CVE-2020-15436 kernel: use-after-free in fs/block_dev.c
1905208 - CVE-2020-35513 kernel: fix nfsd failure to clear umask after processing an open or create [rhel-7]
1911309 - CVE-2020-35513 kernel: Nfsd failure to clear umask after processing an open or create
1917504 - Security patch for CVE-2020-25212 breaks directory listings via 'ls' on NFS V4.2 shares mounted with selinux enabled labels

  6. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: kernel-3.10.0-1160.15.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm kernel-doc-3.10.0-1160.15.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.15.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm perf-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64: bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: kernel-3.10.0-1160.15.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm kernel-doc-3.10.0-1160.15.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.15.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm perf-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: kernel-3.10.0-1160.15.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm kernel-doc-3.10.0-1160.15.2.el7.noarch.rpm

ppc64: bpftool-3.10.0-1160.15.2.el7.ppc64.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-3.10.0-1160.15.2.el7.ppc64.rpm kernel-bootwrapper-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debug-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1160.15.2.el7.ppc64.rpm kernel-devel-3.10.0-1160.15.2.el7.ppc64.rpm kernel-headers-3.10.0-1160.15.2.el7.ppc64.rpm kernel-tools-3.10.0-1160.15.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.ppc64.rpm perf-3.10.0-1160.15.2.el7.ppc64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm python-perf-3.10.0-1160.15.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm

ppc64le: bpftool-3.10.0-1160.15.2.el7.ppc64le.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debug-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-devel-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-headers-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-tools-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.ppc64le.rpm perf-3.10.0-1160.15.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm python-perf-3.10.0-1160.15.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm

s390x: bpftool-3.10.0-1160.15.2.el7.s390x.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm kernel-3.10.0-1160.15.2.el7.s390x.rpm kernel-debug-3.10.0-1160.15.2.el7.s390x.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.s390x.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm kernel-debuginfo-common-s390x-3.10.0-1160.15.2.el7.s390x.rpm kernel-devel-3.10.0-1160.15.2.el7.s390x.rpm kernel-headers-3.10.0-1160.15.2.el7.s390x.rpm kernel-kdump-3.10.0-1160.15.2.el7.s390x.rpm kernel-kdump-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm kernel-kdump-devel-3.10.0-1160.15.2.el7.s390x.rpm perf-3.10.0-1160.15.2.el7.s390x.rpm perf-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm python-perf-3.10.0-1160.15.2.el7.s390x.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm

x86_64: bpftool-3.10.0-1160.15.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm perf-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: bpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1160.15.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.ppc64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm

ppc64le: bpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm

x86_64: bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: kernel-3.10.0-1160.15.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm kernel-doc-3.10.0-1160.15.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.15.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm perf-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64: bpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.


--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

==========================================================================
Ubuntu Security Notice USN-4752-1
February 25, 2021

linux-oem-5.6 vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 20.04 LTS

Summary:

Several security issues were fixed in the Linux kernel.

Software Description: - linux-oem-5.6: Linux kernel for OEM systems

Details:

Daniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered that legacy pairing and secure-connections pairing authentication in the Bluetooth protocol could allow an unauthenticated user to complete authentication without pairing credentials via adjacent access. A physically proximate attacker could use this to impersonate a previously paired Bluetooth device. (CVE-2020-10135)

Jay Shin discovered that the ext4 file system implementation in the Linux kernel did not properly handle directory access with broken indexing, leading to an out-of-bounds read vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-14314)

It was discovered that the block layer implementation in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-15436)

It was discovered that the serial port driver in the Linux kernel did not properly initialize a pointer in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2020-15437)

Andy Nguyen discovered that the Bluetooth HCI event packet parser in the Linux kernel did not properly handle event advertisements of certain sizes, leading to a heap-based buffer overflow. A physically proximate remote attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-24490)

It was discovered that the NFS client implementation in the Linux kernel did not properly perform bounds checking before copying security labels in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25212)

It was discovered that the Rados block device (rbd) driver in the Linux kernel did not properly perform privilege checks for access to rbd devices in some situations. A local attacker could use this to map or unmap rbd block devices. (CVE-2020-25284)

It was discovered that the block layer subsystem in the Linux kernel did not properly handle zero-length requests. A local attacker could use this to cause a denial of service. (CVE-2020-25641)

It was discovered that the HDLC PPP implementation in the Linux kernel did not properly validate input in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25643)

Kiyin (尹亮) discovered that the perf subsystem in the Linux kernel did not properly deallocate memory in some situations. A privileged attacker could use this to cause a denial of service (kernel memory exhaustion). (CVE-2020-25704)

It was discovered that the KVM hypervisor in the Linux kernel did not properly handle interrupts in certain situations. A local attacker in a guest VM could possibly use this to cause a denial of service (host system crash). (CVE-2020-27152)

It was discovered that the jfs file system implementation in the Linux kernel contained an out-of-bounds read vulnerability. A local attacker could use this to possibly cause a denial of service (system crash). (CVE-2020-27815)

It was discovered that an information leak existed in the syscall implementation in the Linux kernel on 32 bit systems. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2020-28588)

It was discovered that the framebuffer implementation in the Linux kernel did not properly perform range checks in certain situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2020-28915)

Jann Horn discovered a race condition in the copy-on-write implementation in the Linux kernel when handling hugepages. A local attacker could use this to gain unintended write access to read-only memory pages. (CVE-2020-29368)

Jann Horn discovered that the mmap implementation in the Linux kernel contained a race condition when handling munmap() operations, leading to a read-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information. (CVE-2020-29369)

Jann Horn discovered that the romfs file system in the Linux kernel did not properly validate file system meta-data, leading to an out-of-bounds read. An attacker could use this to construct a malicious romfs image that, when mounted, exposed sensitive information (kernel memory). (CVE-2020-29371)

Jann Horn discovered that the tty subsystem of the Linux kernel did not use consistent locking in some situations, leading to a read-after-free vulnerability. (CVE-2020-29660)

Jann Horn discovered a race condition in the tty subsystem of the Linux kernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after- free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-29661)

It was discovered that a race condition existed that caused the Linux kernel to not properly restrict exit signal delivery. A local attacker could possibly use this to send signals to arbitrary processes. (CVE-2020-35508)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 20.04 LTS:
  linux-image-5.6.0-1048-oem      5.6.0-1048.52
  linux-image-oem-20.04           5.6.0.1048.44

After a standard system update you need to reboot your computer to make all the necessary changes.

ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.

References:
  https://usn.ubuntu.com/4752-1
  CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437, CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641, CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815, CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369, CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508

Package Information: https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202011-0423",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.5"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.15"
      },
      {
        "model": "aff a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "brocade fabric operating system",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "broadcom",
        "version": null
      },
      {
        "model": "solidfire \\\u0026 hci management node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h610c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.4.229"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.5"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.9.229"
      },
      {
        "model": "a700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.7.6"
      },
      {
        "model": "fas 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.10"
      },
      {
        "model": "fabric-attached storage a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h615c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "aff 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "aff 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas 500f",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire baseboard management controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "a250",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.14.186"
      },
      {
        "model": "aff 500f",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "cloud backup",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h610s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.20"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.4.49"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "2.6.38"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.19.130"
      },
      {
        "model": "fas 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "linux",
        "version": "5.8"
      },
      {
        "model": "kernel",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "linux",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      }
    ],
    "trust": 1.2
  },
  "cve": "CVE-2020-15436",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-15436",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 1.9,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "id": "VHN-168414",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.1,
            "vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 6.7,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 0.8,
            "id": "CVE-2020-15436",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Local",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 6.7,
            "baseSeverity": "Medium",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-15436",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "High",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-15436",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-15436",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202011-1793",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-168414",
            "trust": 0.1,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-15436",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Use-after-free vulnerability in fs/block_dev.c in the Linux kernel before 5.8 allows local users to gain privileges or cause a denial of service by leveraging improper access to a certain error field. Linux Kernel Exists in a vulnerability related to the use of freed memory.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. \n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBug fix:\n\n* RHACM 2.0.8 images (BZ #1915461)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1915461 - RHACM 2.0.8 images\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n\n5. 7.6) - ppc64, ppc64le, x86_64\n\n3. \n\nBug Fix(es):\n\n* [infiniband] Backport Request to fix Multicast Sendonly joins\n(BZ#1937820)\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1899804 - CVE-2020-28374 kernel: SCSI target (LIO) write to any block on ILO backstore\n1901168 - CVE-2020-15436 kernel: use-after-free in fs/block_dev.c\n1930078 - CVE-2021-27365 kernel: heap buffer overflow in the iSCSI subsystem\n1930079 - CVE-2021-27363 kernel: iscsi: unrestricted access to sessions and handles\n1930080 - CVE-2021-27364 kernel: out-of-bounds read in libiscsi module\n\n6. 7) - noarch, x86_64\n\n3. \n\nBug Fix(es):\n\n* kernel-rt: update to the latest RHEL7.9.z3 source tree (BZ#1906133)\n\n* [kernel-rt] WARNING: CPU: 8 PID: 586 at kernel/sched/core.c:3644\nmigrate_enable+0x15f/0x210 (BZ#1916123)\n\n* [kernel-rt-debug] [ BUG: bad unlock balance detected! 
] [RHEL-7.9.z]\n(BZ#1916130)\n\n4. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: kernel security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2021:0336-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:0336\nIssue date:        2021-02-02\nCVE Names:         CVE-2020-15436 CVE-2020-35513\n====================================================================\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. \n\nSecurity Fix(es):\n\n* kernel: use-after-free in fs/block_dev.c (CVE-2020-15436)\n\n* kernel: Nfsd failure to clear umask after processing an open or create\n(CVE-2020-35513)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nBug Fix(es):\n\n* double free issue in filelayout_alloc_commit_info (BZ#1679980)\n\n* Regression: Plantronics Device SHS2355-11 PTT button does not work after\nupdate to 7.7 (BZ#1769502)\n\n* Openstack network node reports unregister_netdevice: waiting for\nqr-3cec0c92-9a to become free. Usage count = 1 (BZ#1809519)\n\n* dlm: add ability to interrupt waiting for acquire POSIX lock (BZ#1826858)\n\n* [Azure][RHEL7] soft lockups and performance loss occurring during final\nfsync with parallel dd writes to xfs filesystem in azure instance\n(BZ#1859364)\n\n* Guest crashed when hotplug vcpus on booting kernel stage (BZ#1866138)\n\n* soft lockup occurs while a thread group leader is waiting on\ntasklist_waiters in mm_update_next_owner() where a huge number of the\nthread group members are exiting and trying to take the tasklist_lock. \n(BZ#1872110)\n\n* [DELL EMC 7.6 BUG] Kioxia CM6 NVMe drive fails to enumerate (BZ#1883403)\n\n* [Hyper-V][RHEL7] Request to included a commit that adds a timeout to\nvmbus_wait_for_unload (BZ#1888979)\n\n* Unable to discover the LUNs from new storage port (BZ#1889311)\n\n* RHEL 7.9 Kernel panic at ceph_put_snap_realm+0x21 (BZ#1890386)\n\n* A hard lockup occurrs where one task is looping in an sk_lock spinlock\nthat has been taken by another task running timespec64_add_ns(). \n(BZ#1890911)\n\n* ethtool/mlx5_core provides incorrect SFP module info (BZ#1896756)\n\n* RHEL7.7 - zcrypt: Fix ZCRYPT_PERDEV_REQCNT ioctl (BZ#1896826)\n\n* RHEL7.7 - s390/dasd: Fix zero write for FBA devices (BZ#1896839)\n\n* [Azure]IP forwarding issue in netvsc[7.9.z] (BZ#1898280)\n\n* Security patch for CVE-2020-25212 breaks directory listings via \u0027ls\u0027 on\nNFS V4.2 shares mounted with selinux enabled labels (BZ#1917504)\n\nEnhancement(s):\n\n* RFE : handle better ERRbaduid on SMB1 (BZ#1847041)\n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1901168 - CVE-2020-15436 kernel: use-after-free in fs/block_dev.c\n1905208 - CVE-2020-35513 kernel: fix nfsd failure to clear umask after processing an open or create [rhel-7]\n1911309 - CVE-2020-35513 kernel: Nfsd failure to clear umask after processing an open or create\n1917504 - Security patch for CVE-2020-25212 breaks directory listings via \u0027ls\u0027 on NFS V4.2 shares mounted with selinux enabled labels\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nkernel-3.10.0-1160.15.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.15.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.15.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 
7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nkernel-3.10.0-1160.15.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.15.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.15.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 
7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nkernel-3.10.0-1160.15.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.15.2.el7.noarch.rpm\n\nppc64:\nbpftool-3.10.0-1160.15.2.el7.ppc64.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-bootwrapper-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debug-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-devel-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-headers-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-tools-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.ppc64.rpm\nperf-3.10.0-1160.15.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\npython-perf-3.10.0-1160.15.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\n\nppc64le:\nbpftool-3.10.0-1160.15.2.el7.ppc64le.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debug-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-devel-3.10.0-1160.15.2.el7.ppc64le.rpm\nke
rnel-headers-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-tools-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.ppc64le.rpm\nperf-3.10.0-1160.15.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\npython-perf-3.10.0-1160.15.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\n\ns390x:\nbpftool-3.10.0-1160.15.2.el7.s390x.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-debug-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-debuginfo-common-s390x-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-devel-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-headers-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-kdump-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-kdump-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\nkernel-kdump-devel-3.10.0-1160.15.2.el7.s390x.rpm\nperf-3.10.0-1160.15.2.el7.s390x.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\npython-perf-3.10.0-1160.15.2.el7.s390x.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.s390x.rpm\n\nx86_64:\nbpftool-3.10.0-1160.15.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.15.2.el7.x86_
64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64.rpm\n\nppc64le:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nkernel-3.10.0-1160.15.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.15.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.15.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.15.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.15.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.15.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYBlBsdzjgjWX9erEAQhTZhAAmSFzEZeB0CWYNaJ2PVwoFm4PA9rdYDyg\nG1j/plxrO6bczNEz+XDnAzRPrCbJRPPt6VJxjpLkb25ph0f5tQ+Q7Ph7sAefSbDX\nBLDjjvl+Wd1g2FEfIQ43wDp8UWuFCVVMF3ajJHFz9ROqrA/1hs0gj7ht9gXRlttT\nLSI67A08tEWRPtaf5c1M8h/IJtZiF4sfYDrfhp4mFRTZYybTvVjML+xf69Qq7o2D\nAsxbyKRVNQKC0Epm6C+Tzbw6SxhonrAQyjADWenQ8bCS2TF8WY2OZA7sNs7nddZu\nHa/mCB2vSR2WCWLGxCLXTtsK3y52qPIyUn4mBmatJUIBcbJMnQbgZgWrEcTobsoD\nN5MWdqE6xGjct0KMz0fV6J9D5JWQjUN4O8K0vVQP4aoAX25jMWCq14RLLRUvusJm\ndLI59E5nN1pLMlADiAAh2Iceac/daIF9fvWn2XoF16/ZQNffa0yCiNFaDg+AW4Tg\nZ/b82VoOiz7uJWyv06TMcljafEaIxjpnjGmpKQ2qz8UYoxYYsnRyKpHJxLeiB53A\nTKbkiQJoFutNeUcbBSA6F6sqLlaJ7CtoyzxsVVwM+LtYF1iUXqC+Hp6Gs5NB8WXr\nJQSrrv0X0H7sAu7FHCyL/ygMQK/IiZKiPxiRBZJH6pJz5OL8GVKxR1CSZmHXvgKo\nQPLPtfMOGPs=Hdxh\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. 7) - aarch64, noarch, ppc64le\n\n3. ==========================================================================\nUbuntu Security Notice USN-4752-1\nFebruary 25, 2021\n\nlinux-oem-5.6 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-oem-5.6: Linux kernel for OEM systems\n\nDetails:\n\nDaniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered\nthat legacy pairing and secure-connections pairing authentication in the\nBluetooth protocol could allow an unauthenticated user to complete\nauthentication without pairing credentials via adjacent access. A\nphysically proximate attacker could use this to impersonate a previously\npaired Bluetooth device. 
(CVE-2020-10135)\n\nJay Shin discovered that the ext4 file system implementation in the Linux\nkernel did not properly handle directory access with broken indexing,\nleading to an out-of-bounds read vulnerability. A local attacker could use\nthis to cause a denial of service (system crash). (CVE-2020-14314)\n\nIt was discovered that the block layer implementation in the Linux kernel\ndid not properly perform reference counting in some situations, leading to\na use-after-free vulnerability. A local attacker could use this to cause a\ndenial of service (system crash). (CVE-2020-15436)\n\nIt was discovered that the serial port driver in the Linux kernel did not\nproperly initialize a pointer in some situations. A local attacker could\npossibly use this to cause a denial of service (system crash). \n(CVE-2020-15437)\n\nAndy Nguyen discovered that the Bluetooth HCI event packet parser in the\nLinux kernel did not properly handle event advertisements of certain sizes,\nleading to a heap-based buffer overflow. A physically proximate remote\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. (CVE-2020-24490)\n\nIt was discovered that the NFS client implementation in the Linux kernel\ndid not properly perform bounds checking before copying security labels in\nsome situations. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2020-25212)\n\nIt was discovered that the Rados block device (rbd) driver in the Linux\nkernel did not properly perform privilege checks for access to rbd devices\nin some situations. A local attacker could use this to map or unmap rbd\nblock devices. (CVE-2020-25284)\n\nIt was discovered that the block layer subsystem in the Linux kernel did\nnot properly handle zero-length requests. A local attacker could use this\nto cause a denial of service. 
(CVE-2020-25641)\n\nIt was discovered that the HDLC PPP implementation in the Linux kernel did\nnot properly validate input in some situations. A local attacker could use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2020-25643)\n\nKiyin (\u5c39\u4eae) discovered that the perf subsystem in the Linux kernel did\nnot properly deallocate memory in some situations. A privileged attacker\ncould use this to cause a denial of service (kernel memory exhaustion). \n(CVE-2020-25704)\n\nIt was discovered that the KVM hypervisor in the Linux kernel did not\nproperly handle interrupts in certain situations. A local attacker in a\nguest VM could possibly use this to cause a denial of service (host system\ncrash). (CVE-2020-27152)\n\nIt was discovered that the jfs file system implementation in the Linux\nkernel contained an out-of-bounds read vulnerability. A local attacker\ncould use this to possibly cause a denial of service (system crash). \n(CVE-2020-27815)\n\nIt was discovered that an information leak existed in the syscall\nimplementation in the Linux kernel on 32 bit systems. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2020-28588)\n\nIt was discovered that the framebuffer implementation in the Linux kernel\ndid not properly perform range checks in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). \n(CVE-2020-28915)\n\nJann Horn discovered a race condition in the copy-on-write implementation\nin the Linux kernel when handling hugepages. A local attacker could use\nthis to gain unintended write access to read-only memory pages. \n(CVE-2020-29368)\n\nJann Horn discovered that the mmap implementation in the Linux kernel\ncontained a race condition when handling munmap() operations, leading to a\nread-after-free vulnerability. 
A local attacker could use this to cause a\ndenial of service (system crash) or possibly expose sensitive information. \n(CVE-2020-29369)\n\nJann Horn discovered that the romfs file system in the Linux kernel did not\nproperly validate file system meta-data, leading to an out-of-bounds read. \nAn attacker could use this to construct a malicious romfs image that, when\nmounted, exposed sensitive information (kernel memory). (CVE-2020-29371)\n\nJann Horn discovered that the tty subsystem of the Linux kernel did not use\nconsistent locking in some situations, leading to a read-after-free\nvulnerability. \n(CVE-2020-29660)\n\nJann Horn discovered a race condition in the tty subsystem of the Linux\nkernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after-\nfree vulnerability. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2020-29661)\n\nIt was discovered that a race condition existed that caused the Linux\nkernel to not properly restrict exit signal delivery. A local attacker\ncould possibly use this to send signals to arbitrary processes. \n(CVE-2020-35508)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n  linux-image-5.6.0-1048-oem      5.6.0-1048.52\n  linux-image-oem-20.04           5.6.0.1048.44\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
\n\nReferences:\n  https://usn.ubuntu.com/4752-1\n  CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437,\n  CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641,\n  CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815,\n  CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369,\n  CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508\n\nPackage Information:\n  https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "db": "PACKETSTORM",
        "id": "161556"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-15436",
        "trust": 4.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162346",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "161556",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "163248",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "161656",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "161250",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-074-07",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU93656033",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0377",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1148",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2203",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0589",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2604",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1436",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4391",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4341",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0365",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0791",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4377",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0166",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4410",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6112",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0717",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021042828",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062303",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "161258",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "161259",
        "trust": 0.2
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-66297",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-168414",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "db": "PACKETSTORM",
        "id": "161556"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "id": "VAR-202011-0423",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-11-23T19:46:26.104000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Fix\u00a0use-after-free\u00a0in\u00a0blkdev_get()",
        "trust": 0.8,
        "url": "http://www.kernel.org"
      },
      {
        "title": "Linux kernel Remediation of resource management error vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=136428"
      },
      {
        "title": "Red Hat: Moderate: kernel-rt security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210338 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: kernel security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210336 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: kernel-alt security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210354 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.1.3 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210607 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: Vulnerabilities in the Linux Kernel, Samba, Sudo, Python, and tcmu-runner affect IBM Spectrum Protect Plus",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ddbe78143bb073890c2ecb87b35850bf"
      },
      {
        "title": "IBM: Security Bulletin: IBM Data Risk Manager is affected by multiple vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=e9d6f12dfd14652e2bb7e5c28ded162b"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-416",
        "trust": 1.1
      },
      {
        "problemtype": "Use of freed memory (CWE-416) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15436"
      },
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20201218-0002/"
      },
      {
        "trust": 1.8,
        "url": "https://lkml.org/lkml/2020/6/7/379"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu93656033/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-074-07"
      },
      {
        "trust": 0.7,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-the-linux-kernel-samba-sudo-python-and-tcmu-runner-affect-ibm-spectrum-protect-plus/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15436"
      },
      {
        "trust": 0.6,
        "url": "https://source.android.com/security/bulletin/2021-04-01"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4391/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0717"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4377/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4410/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1148"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4341/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021042828"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163248/red-hat-security-advisory-2021-2523-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161250/red-hat-security-advisory-2021-0354-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2203"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062303"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6112"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0365/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0377/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161656/red-hat-security-advisory-2021-0719-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162346/red-hat-security-advisory-2021-1376-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-data-risk-manager-is-affected-by-multiple-vulnerabilities-4/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0589"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1436"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0791"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0166/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/linux-kernel-use-after-free-via-blkdev-get-34039"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161556/ubuntu-security-notice-usn-4752-1.html"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29661"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-35513"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35513"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/errata/rhsa-2021:0338"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/416.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12723"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10878"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0719"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12723"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10543"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2523"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1376"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0354"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29660"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4752-1"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25212"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15437"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25704"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25284"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25643"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14314"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29371"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35508"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "db": "PACKETSTORM",
        "id": "161556"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "db": "PACKETSTORM",
        "id": "161556"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-11-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "date": "2020-11-23T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "date": "2021-07-16T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "date": "2021-03-04T15:33:19",
        "db": "PACKETSTORM",
        "id": "161656"
      },
      {
        "date": "2021-06-22T19:41:34",
        "db": "PACKETSTORM",
        "id": "163248"
      },
      {
        "date": "2021-04-27T15:32:47",
        "db": "PACKETSTORM",
        "id": "162346"
      },
      {
        "date": "2021-02-02T16:13:04",
        "db": "PACKETSTORM",
        "id": "161259"
      },
      {
        "date": "2021-02-02T16:12:50",
        "db": "PACKETSTORM",
        "id": "161258"
      },
      {
        "date": "2021-02-02T16:11:22",
        "db": "PACKETSTORM",
        "id": "161250"
      },
      {
        "date": "2021-02-25T15:31:12",
        "db": "PACKETSTORM",
        "id": "161556"
      },
      {
        "date": "2020-11-23T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "date": "2020-11-23T21:15:11.813000",
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-10-19T00:00:00",
        "db": "VULHUB",
        "id": "VHN-168414"
      },
      {
        "date": "2020-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-15436"
      },
      {
        "date": "2024-03-22T07:12:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      },
      {
        "date": "2022-11-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      },
      {
        "date": "2024-11-21T05:05:33.167000",
        "db": "NVD",
        "id": "CVE-2020-15436"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "161556"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Linux\u00a0Kernel\u00a0 Vulnerability in using free memory in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013950"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "resource management error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-1793"
      }
    ],
    "trust": 0.6
  }
}